Test Report: KVM_Linux_crio 19664

                    
b0eadc949d6b6708e1f550519f8385f72d7fe4f5:2024-09-19:36285

Test fail (12/203)

TestAddons/Setup (2400.09s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-140799 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-140799 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.958879861s)
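Note on the "signal: killed" status above: that is the error string Go's os/exec reports when a child process is killed because its context deadline expired, which matches the ~40-minute cutoff seen here. A minimal sketch of that pattern follows (assuming a plain exec.CommandContext wrapper; this is not the actual helper behind addons_test.go:110, and the flag list is abbreviated):

// Minimal sketch, not the helper used by the test: running a command under a
// context deadline is what produces the "signal: killed" error text, because
// os/exec sends SIGKILL to the child when the deadline expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The failing run above was killed just short of a 40-minute budget.
	ctx, cancel := context.WithTimeout(context.Background(), 40*time.Minute)
	defer cancel()

	// Flags abbreviated from the failing invocation; binary path as in the report.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"start", "-p", "addons-140799", "--wait=true", "--memory=4000",
		"--driver=kvm2", "--container-runtime=crio", "--alsologtostderr")

	started := time.Now()
	if err := cmd.Run(); err != nil {
		// On timeout this prints something like: Non-zero exit: signal: killed (39m59.9s)
		fmt.Printf("Non-zero exit: %v (%s)\n", err, time.Since(started))
	}
}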

-- stdout --
	* [addons-140799] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-140799" primary control-plane node in "addons-140799" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	  - Using image docker.io/busybox:stable
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	* Verifying ingress addon...
	* Verifying registry addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-140799 service yakd-dashboard -n yakd-dashboard
	
	* Verifying csi-hostpath-driver addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-140799 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: ingress-dns, storage-provisioner, default-storageclass, nvidia-device-plugin, metrics-server, helm-tiller, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth

-- /stdout --
** stderr ** 
	I0919 18:40:03.065232   15867 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:40:03.065355   15867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:40:03.065365   15867 out.go:358] Setting ErrFile to fd 2...
	I0919 18:40:03.065371   15867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:40:03.065582   15867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 18:40:03.066190   15867 out.go:352] Setting JSON to false
	I0919 18:40:03.067014   15867 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1347,"bootTime":1726769856,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:40:03.067121   15867 start.go:139] virtualization: kvm guest
	I0919 18:40:03.069162   15867 out.go:177] * [addons-140799] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:40:03.070241   15867 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:40:03.070281   15867 notify.go:220] Checking for updates...
	I0919 18:40:03.072720   15867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:40:03.073956   15867 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 18:40:03.075258   15867 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 18:40:03.076500   15867 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:40:03.077807   15867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:40:03.079137   15867 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:40:03.111136   15867 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 18:40:03.112708   15867 start.go:297] selected driver: kvm2
	I0919 18:40:03.112724   15867 start.go:901] validating driver "kvm2" against <nil>
	I0919 18:40:03.112734   15867 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:40:03.113454   15867 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:40:03.113550   15867 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 18:40:03.129784   15867 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 18:40:03.129849   15867 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:40:03.130092   15867 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:40:03.130129   15867 cni.go:84] Creating CNI manager for ""
	I0919 18:40:03.130164   15867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:40:03.130171   15867 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:40:03.130222   15867 start.go:340] cluster config:
	{Name:addons-140799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-140799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:40:03.130323   15867 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:40:03.132253   15867 out.go:177] * Starting "addons-140799" primary control-plane node in "addons-140799" cluster
	I0919 18:40:03.133747   15867 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:03.133797   15867 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 18:40:03.133808   15867 cache.go:56] Caching tarball of preloaded images
	I0919 18:40:03.133884   15867 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 18:40:03.133894   15867 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 18:40:03.134178   15867 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/config.json ...
	I0919 18:40:03.134198   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/config.json: {Name:mkabf51094c199a000619431160b50b5d0f7771a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:03.134330   15867 start.go:360] acquireMachinesLock for addons-140799: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 18:40:03.134387   15867 start.go:364] duration metric: took 44.178µs to acquireMachinesLock for "addons-140799"
	I0919 18:40:03.134404   15867 start.go:93] Provisioning new machine with config: &{Name:addons-140799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-140799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:40:03.134452   15867 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 18:40:03.136990   15867 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0919 18:40:03.137165   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:03.137207   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:03.151599   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0919 18:40:03.152028   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:03.152554   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:03.152572   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:03.152885   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:03.153052   15867 main.go:141] libmachine: (addons-140799) Calling .GetMachineName
	I0919 18:40:03.153205   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:03.153373   15867 start.go:159] libmachine.API.Create for "addons-140799" (driver="kvm2")
	I0919 18:40:03.153403   15867 client.go:168] LocalClient.Create starting
	I0919 18:40:03.153445   15867 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem
	I0919 18:40:03.248972   15867 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem
	I0919 18:40:03.345104   15867 main.go:141] libmachine: Running pre-create checks...
	I0919 18:40:03.345125   15867 main.go:141] libmachine: (addons-140799) Calling .PreCreateCheck
	I0919 18:40:03.345654   15867 main.go:141] libmachine: (addons-140799) Calling .GetConfigRaw
	I0919 18:40:03.346150   15867 main.go:141] libmachine: Creating machine...
	I0919 18:40:03.346166   15867 main.go:141] libmachine: (addons-140799) Calling .Create
	I0919 18:40:03.346341   15867 main.go:141] libmachine: (addons-140799) Creating KVM machine...
	I0919 18:40:03.347468   15867 main.go:141] libmachine: (addons-140799) DBG | found existing default KVM network
	I0919 18:40:03.348177   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:03.348019   15888 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a60}
	I0919 18:40:03.348193   15867 main.go:141] libmachine: (addons-140799) DBG | created network xml: 
	I0919 18:40:03.348206   15867 main.go:141] libmachine: (addons-140799) DBG | <network>
	I0919 18:40:03.348214   15867 main.go:141] libmachine: (addons-140799) DBG |   <name>mk-addons-140799</name>
	I0919 18:40:03.348226   15867 main.go:141] libmachine: (addons-140799) DBG |   <dns enable='no'/>
	I0919 18:40:03.348233   15867 main.go:141] libmachine: (addons-140799) DBG |   
	I0919 18:40:03.348243   15867 main.go:141] libmachine: (addons-140799) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0919 18:40:03.348253   15867 main.go:141] libmachine: (addons-140799) DBG |     <dhcp>
	I0919 18:40:03.348261   15867 main.go:141] libmachine: (addons-140799) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0919 18:40:03.348266   15867 main.go:141] libmachine: (addons-140799) DBG |     </dhcp>
	I0919 18:40:03.348270   15867 main.go:141] libmachine: (addons-140799) DBG |   </ip>
	I0919 18:40:03.348274   15867 main.go:141] libmachine: (addons-140799) DBG |   
	I0919 18:40:03.348280   15867 main.go:141] libmachine: (addons-140799) DBG | </network>
	I0919 18:40:03.348284   15867 main.go:141] libmachine: (addons-140799) DBG | 
	I0919 18:40:03.353584   15867 main.go:141] libmachine: (addons-140799) DBG | trying to create private KVM network mk-addons-140799 192.168.39.0/24...
	I0919 18:40:03.421208   15867 main.go:141] libmachine: (addons-140799) DBG | private KVM network mk-addons-140799 192.168.39.0/24 created
	I0919 18:40:03.421237   15867 main.go:141] libmachine: (addons-140799) Setting up store path in /home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799 ...
	I0919 18:40:03.421258   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:03.421142   15888 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 18:40:03.421276   15867 main.go:141] libmachine: (addons-140799) Building disk image from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 18:40:03.421300   15867 main.go:141] libmachine: (addons-140799) Downloading /home/jenkins/minikube-integration/19664-7917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0919 18:40:03.680746   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:03.680614   15888 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa...
	I0919 18:40:03.728039   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:03.727896   15888 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/addons-140799.rawdisk...
	I0919 18:40:03.728074   15867 main.go:141] libmachine: (addons-140799) DBG | Writing magic tar header
	I0919 18:40:03.728088   15867 main.go:141] libmachine: (addons-140799) DBG | Writing SSH key tar header
	I0919 18:40:03.728177   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:03.728084   15888 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799 ...
	I0919 18:40:03.728199   15867 main.go:141] libmachine: (addons-140799) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799
	I0919 18:40:03.728241   15867 main.go:141] libmachine: (addons-140799) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines
	I0919 18:40:03.728270   15867 main.go:141] libmachine: (addons-140799) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 18:40:03.728279   15867 main.go:141] libmachine: (addons-140799) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917
	I0919 18:40:03.728299   15867 main.go:141] libmachine: (addons-140799) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799 (perms=drwx------)
	I0919 18:40:03.728317   15867 main.go:141] libmachine: (addons-140799) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines (perms=drwxr-xr-x)
	I0919 18:40:03.728329   15867 main.go:141] libmachine: (addons-140799) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 18:40:03.728342   15867 main.go:141] libmachine: (addons-140799) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube (perms=drwxr-xr-x)
	I0919 18:40:03.728352   15867 main.go:141] libmachine: (addons-140799) DBG | Checking permissions on dir: /home/jenkins
	I0919 18:40:03.728373   15867 main.go:141] libmachine: (addons-140799) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917 (perms=drwxrwxr-x)
	I0919 18:40:03.728389   15867 main.go:141] libmachine: (addons-140799) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 18:40:03.728400   15867 main.go:141] libmachine: (addons-140799) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 18:40:03.728415   15867 main.go:141] libmachine: (addons-140799) Creating domain...
	I0919 18:40:03.728430   15867 main.go:141] libmachine: (addons-140799) DBG | Checking permissions on dir: /home
	I0919 18:40:03.728442   15867 main.go:141] libmachine: (addons-140799) DBG | Skipping /home - not owner
	I0919 18:40:03.729412   15867 main.go:141] libmachine: (addons-140799) define libvirt domain using xml: 
	I0919 18:40:03.729444   15867 main.go:141] libmachine: (addons-140799) <domain type='kvm'>
	I0919 18:40:03.729456   15867 main.go:141] libmachine: (addons-140799)   <name>addons-140799</name>
	I0919 18:40:03.729472   15867 main.go:141] libmachine: (addons-140799)   <memory unit='MiB'>4000</memory>
	I0919 18:40:03.729481   15867 main.go:141] libmachine: (addons-140799)   <vcpu>2</vcpu>
	I0919 18:40:03.729491   15867 main.go:141] libmachine: (addons-140799)   <features>
	I0919 18:40:03.729513   15867 main.go:141] libmachine: (addons-140799)     <acpi/>
	I0919 18:40:03.729532   15867 main.go:141] libmachine: (addons-140799)     <apic/>
	I0919 18:40:03.729541   15867 main.go:141] libmachine: (addons-140799)     <pae/>
	I0919 18:40:03.729548   15867 main.go:141] libmachine: (addons-140799)     
	I0919 18:40:03.729559   15867 main.go:141] libmachine: (addons-140799)   </features>
	I0919 18:40:03.729568   15867 main.go:141] libmachine: (addons-140799)   <cpu mode='host-passthrough'>
	I0919 18:40:03.729578   15867 main.go:141] libmachine: (addons-140799)   
	I0919 18:40:03.729589   15867 main.go:141] libmachine: (addons-140799)   </cpu>
	I0919 18:40:03.729600   15867 main.go:141] libmachine: (addons-140799)   <os>
	I0919 18:40:03.729614   15867 main.go:141] libmachine: (addons-140799)     <type>hvm</type>
	I0919 18:40:03.729626   15867 main.go:141] libmachine: (addons-140799)     <boot dev='cdrom'/>
	I0919 18:40:03.729636   15867 main.go:141] libmachine: (addons-140799)     <boot dev='hd'/>
	I0919 18:40:03.729647   15867 main.go:141] libmachine: (addons-140799)     <bootmenu enable='no'/>
	I0919 18:40:03.729656   15867 main.go:141] libmachine: (addons-140799)   </os>
	I0919 18:40:03.729664   15867 main.go:141] libmachine: (addons-140799)   <devices>
	I0919 18:40:03.729682   15867 main.go:141] libmachine: (addons-140799)     <disk type='file' device='cdrom'>
	I0919 18:40:03.729697   15867 main.go:141] libmachine: (addons-140799)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/boot2docker.iso'/>
	I0919 18:40:03.729709   15867 main.go:141] libmachine: (addons-140799)       <target dev='hdc' bus='scsi'/>
	I0919 18:40:03.729717   15867 main.go:141] libmachine: (addons-140799)       <readonly/>
	I0919 18:40:03.729735   15867 main.go:141] libmachine: (addons-140799)     </disk>
	I0919 18:40:03.729747   15867 main.go:141] libmachine: (addons-140799)     <disk type='file' device='disk'>
	I0919 18:40:03.729761   15867 main.go:141] libmachine: (addons-140799)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 18:40:03.729780   15867 main.go:141] libmachine: (addons-140799)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/addons-140799.rawdisk'/>
	I0919 18:40:03.729791   15867 main.go:141] libmachine: (addons-140799)       <target dev='hda' bus='virtio'/>
	I0919 18:40:03.729798   15867 main.go:141] libmachine: (addons-140799)     </disk>
	I0919 18:40:03.729810   15867 main.go:141] libmachine: (addons-140799)     <interface type='network'>
	I0919 18:40:03.729821   15867 main.go:141] libmachine: (addons-140799)       <source network='mk-addons-140799'/>
	I0919 18:40:03.729830   15867 main.go:141] libmachine: (addons-140799)       <model type='virtio'/>
	I0919 18:40:03.729843   15867 main.go:141] libmachine: (addons-140799)     </interface>
	I0919 18:40:03.729854   15867 main.go:141] libmachine: (addons-140799)     <interface type='network'>
	I0919 18:40:03.729865   15867 main.go:141] libmachine: (addons-140799)       <source network='default'/>
	I0919 18:40:03.729878   15867 main.go:141] libmachine: (addons-140799)       <model type='virtio'/>
	I0919 18:40:03.729895   15867 main.go:141] libmachine: (addons-140799)     </interface>
	I0919 18:40:03.729912   15867 main.go:141] libmachine: (addons-140799)     <serial type='pty'>
	I0919 18:40:03.729924   15867 main.go:141] libmachine: (addons-140799)       <target port='0'/>
	I0919 18:40:03.729933   15867 main.go:141] libmachine: (addons-140799)     </serial>
	I0919 18:40:03.729944   15867 main.go:141] libmachine: (addons-140799)     <console type='pty'>
	I0919 18:40:03.729960   15867 main.go:141] libmachine: (addons-140799)       <target type='serial' port='0'/>
	I0919 18:40:03.729978   15867 main.go:141] libmachine: (addons-140799)     </console>
	I0919 18:40:03.729992   15867 main.go:141] libmachine: (addons-140799)     <rng model='virtio'>
	I0919 18:40:03.730006   15867 main.go:141] libmachine: (addons-140799)       <backend model='random'>/dev/random</backend>
	I0919 18:40:03.730015   15867 main.go:141] libmachine: (addons-140799)     </rng>
	I0919 18:40:03.730025   15867 main.go:141] libmachine: (addons-140799)     
	I0919 18:40:03.730033   15867 main.go:141] libmachine: (addons-140799)     
	I0919 18:40:03.730041   15867 main.go:141] libmachine: (addons-140799)   </devices>
	I0919 18:40:03.730050   15867 main.go:141] libmachine: (addons-140799) </domain>
	I0919 18:40:03.730069   15867 main.go:141] libmachine: (addons-140799) 
	I0919 18:40:03.735696   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:ed:7f:c8 in network default
	I0919 18:40:03.736282   15867 main.go:141] libmachine: (addons-140799) Ensuring networks are active...
	I0919 18:40:03.736302   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:03.736874   15867 main.go:141] libmachine: (addons-140799) Ensuring network default is active
	I0919 18:40:03.737202   15867 main.go:141] libmachine: (addons-140799) Ensuring network mk-addons-140799 is active
	I0919 18:40:03.738600   15867 main.go:141] libmachine: (addons-140799) Getting domain xml...
	I0919 18:40:03.739276   15867 main.go:141] libmachine: (addons-140799) Creating domain...
	I0919 18:40:05.140126   15867 main.go:141] libmachine: (addons-140799) Waiting to get IP...
	I0919 18:40:05.140806   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:05.141193   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:05.141247   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:05.141193   15888 retry.go:31] will retry after 211.550294ms: waiting for machine to come up
	I0919 18:40:05.354534   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:05.354993   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:05.355028   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:05.354947   15888 retry.go:31] will retry after 251.94347ms: waiting for machine to come up
	I0919 18:40:05.608364   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:05.608751   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:05.608787   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:05.608733   15888 retry.go:31] will retry after 320.586428ms: waiting for machine to come up
	I0919 18:40:05.931082   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:05.931459   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:05.931483   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:05.931412   15888 retry.go:31] will retry after 403.960365ms: waiting for machine to come up
	I0919 18:40:06.336972   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:06.337446   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:06.337476   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:06.337384   15888 retry.go:31] will retry after 687.539282ms: waiting for machine to come up
	I0919 18:40:07.026074   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:07.026436   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:07.026459   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:07.026393   15888 retry.go:31] will retry after 788.158651ms: waiting for machine to come up
	I0919 18:40:07.815744   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:07.816092   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:07.816145   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:07.816069   15888 retry.go:31] will retry after 900.519516ms: waiting for machine to come up
	I0919 18:40:08.718649   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:08.719038   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:08.719076   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:08.719012   15888 retry.go:31] will retry after 1.247363728s: waiting for machine to come up
	I0919 18:40:09.968489   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:09.968851   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:09.968881   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:09.968836   15888 retry.go:31] will retry after 1.814584088s: waiting for machine to come up
	I0919 18:40:11.785882   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:11.786305   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:11.786325   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:11.786263   15888 retry.go:31] will retry after 1.801717404s: waiting for machine to come up
	I0919 18:40:13.589338   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:13.589788   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:13.589815   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:13.589755   15888 retry.go:31] will retry after 2.891732075s: waiting for machine to come up
	I0919 18:40:16.482531   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:16.482950   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:16.482966   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:16.482916   15888 retry.go:31] will retry after 3.021154535s: waiting for machine to come up
	I0919 18:40:19.506098   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:19.506462   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:19.506485   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:19.506423   15888 retry.go:31] will retry after 3.676080126s: waiting for machine to come up
	I0919 18:40:23.184376   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:23.184745   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find current IP address of domain addons-140799 in network mk-addons-140799
	I0919 18:40:23.184780   15867 main.go:141] libmachine: (addons-140799) DBG | I0919 18:40:23.184705   15888 retry.go:31] will retry after 4.004680212s: waiting for machine to come up
	I0919 18:40:27.190488   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.190885   15867 main.go:141] libmachine: (addons-140799) Found IP for machine: 192.168.39.11
	I0919 18:40:27.190907   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has current primary IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.190916   15867 main.go:141] libmachine: (addons-140799) Reserving static IP address...
	I0919 18:40:27.191247   15867 main.go:141] libmachine: (addons-140799) DBG | unable to find host DHCP lease matching {name: "addons-140799", mac: "52:54:00:f1:93:a9", ip: "192.168.39.11"} in network mk-addons-140799
	I0919 18:40:27.261381   15867 main.go:141] libmachine: (addons-140799) DBG | Getting to WaitForSSH function...
	I0919 18:40:27.261403   15867 main.go:141] libmachine: (addons-140799) Reserved static IP address: 192.168.39.11
	I0919 18:40:27.261415   15867 main.go:141] libmachine: (addons-140799) Waiting for SSH to be available...
	I0919 18:40:27.263988   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.264408   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:27.264449   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.264577   15867 main.go:141] libmachine: (addons-140799) DBG | Using SSH client type: external
	I0919 18:40:27.264600   15867 main.go:141] libmachine: (addons-140799) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa (-rw-------)
	I0919 18:40:27.264617   15867 main.go:141] libmachine: (addons-140799) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 18:40:27.264628   15867 main.go:141] libmachine: (addons-140799) DBG | About to run SSH command:
	I0919 18:40:27.264635   15867 main.go:141] libmachine: (addons-140799) DBG | exit 0
	I0919 18:40:27.397328   15867 main.go:141] libmachine: (addons-140799) DBG | SSH cmd err, output: <nil>: 
	I0919 18:40:27.397603   15867 main.go:141] libmachine: (addons-140799) KVM machine creation complete!
	I0919 18:40:27.397957   15867 main.go:141] libmachine: (addons-140799) Calling .GetConfigRaw
	I0919 18:40:27.398508   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:27.398680   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:27.398840   15867 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 18:40:27.398853   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:27.400059   15867 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 18:40:27.400070   15867 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 18:40:27.400075   15867 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 18:40:27.400082   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:27.402548   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.402931   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:27.402945   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.403062   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:27.403234   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:27.403384   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:27.403515   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:27.403683   15867 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:27.403865   15867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 18:40:27.403876   15867 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 18:40:27.508342   15867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:40:27.508367   15867 main.go:141] libmachine: Detecting the provisioner...
	I0919 18:40:27.508376   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:27.511167   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.511532   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:27.511554   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.511742   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:27.511911   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:27.512090   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:27.512252   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:27.512394   15867 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:27.512573   15867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 18:40:27.512585   15867 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 18:40:27.617972   15867 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0919 18:40:27.618054   15867 main.go:141] libmachine: found compatible host: buildroot
	I0919 18:40:27.618064   15867 main.go:141] libmachine: Provisioning with buildroot...
	I0919 18:40:27.618074   15867 main.go:141] libmachine: (addons-140799) Calling .GetMachineName
	I0919 18:40:27.618299   15867 buildroot.go:166] provisioning hostname "addons-140799"
	I0919 18:40:27.618327   15867 main.go:141] libmachine: (addons-140799) Calling .GetMachineName
	I0919 18:40:27.618530   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:27.621317   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.621641   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:27.621661   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.621806   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:27.621981   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:27.622119   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:27.622243   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:27.622399   15867 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:27.622612   15867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 18:40:27.622634   15867 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-140799 && echo "addons-140799" | sudo tee /etc/hostname
	I0919 18:40:27.740012   15867 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-140799
	
	I0919 18:40:27.740039   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:27.742682   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.743026   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:27.743054   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.743185   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:27.743364   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:27.743516   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:27.743648   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:27.743779   15867 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:27.743955   15867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 18:40:27.743972   15867 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-140799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-140799/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-140799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:40:27.857997   15867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:40:27.858025   15867 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 18:40:27.858044   15867 buildroot.go:174] setting up certificates
	I0919 18:40:27.858056   15867 provision.go:84] configureAuth start
	I0919 18:40:27.858066   15867 main.go:141] libmachine: (addons-140799) Calling .GetMachineName
	I0919 18:40:27.858304   15867 main.go:141] libmachine: (addons-140799) Calling .GetIP
	I0919 18:40:27.861243   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.861586   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:27.861612   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.861813   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:27.863871   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.864172   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:27.864184   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:27.864315   15867 provision.go:143] copyHostCerts
	I0919 18:40:27.864378   15867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 18:40:27.864553   15867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 18:40:27.864635   15867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 18:40:27.864697   15867 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.addons-140799 san=[127.0.0.1 192.168.39.11 addons-140799 localhost minikube]
	I0919 18:40:28.065848   15867 provision.go:177] copyRemoteCerts
	I0919 18:40:28.065903   15867 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:40:28.065925   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:28.068642   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.068955   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:28.068971   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.069139   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:28.069299   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:28.069436   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:28.069552   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:28.151701   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 18:40:28.175711   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 18:40:28.198615   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 18:40:28.221817   15867 provision.go:87] duration metric: took 363.7459ms to configureAuth
	I0919 18:40:28.221855   15867 buildroot.go:189] setting minikube options for container-runtime
	I0919 18:40:28.222081   15867 config.go:182] Loaded profile config "addons-140799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:28.222177   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:28.224857   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.225245   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:28.225272   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.225482   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:28.225652   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:28.225842   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:28.226004   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:28.226180   15867 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:28.226338   15867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 18:40:28.226351   15867 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 18:40:28.452831   15867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 18:40:28.452857   15867 main.go:141] libmachine: Checking connection to Docker...
	I0919 18:40:28.452865   15867 main.go:141] libmachine: (addons-140799) Calling .GetURL
	I0919 18:40:28.454345   15867 main.go:141] libmachine: (addons-140799) DBG | Using libvirt version 6000000
	I0919 18:40:28.456477   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.456822   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:28.456846   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.456952   15867 main.go:141] libmachine: Docker is up and running!
	I0919 18:40:28.456966   15867 main.go:141] libmachine: Reticulating splines...
	I0919 18:40:28.456972   15867 client.go:171] duration metric: took 25.303561236s to LocalClient.Create
	I0919 18:40:28.456991   15867 start.go:167] duration metric: took 25.303621227s to libmachine.API.Create "addons-140799"
	I0919 18:40:28.457000   15867 start.go:293] postStartSetup for "addons-140799" (driver="kvm2")
	I0919 18:40:28.457010   15867 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:40:28.457026   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:28.457246   15867 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:40:28.457271   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:28.459308   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.459635   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:28.459664   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.459826   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:28.459992   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:28.460188   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:28.460313   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:28.543964   15867 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:40:28.548383   15867 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 18:40:28.548410   15867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 18:40:28.548478   15867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 18:40:28.548501   15867 start.go:296] duration metric: took 91.495795ms for postStartSetup
	I0919 18:40:28.548530   15867 main.go:141] libmachine: (addons-140799) Calling .GetConfigRaw
	I0919 18:40:28.549095   15867 main.go:141] libmachine: (addons-140799) Calling .GetIP
	I0919 18:40:28.551490   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.551829   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:28.551864   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.552092   15867 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/config.json ...
	I0919 18:40:28.552265   15867 start.go:128] duration metric: took 25.417804295s to createHost
	I0919 18:40:28.552284   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:28.554483   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.554763   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:28.554781   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.554928   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:28.555084   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:28.555222   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:28.555326   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:28.555491   15867 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:28.555653   15867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 18:40:28.555662   15867 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 18:40:28.662012   15867 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726771228.637721817
	
	I0919 18:40:28.662040   15867 fix.go:216] guest clock: 1726771228.637721817
	I0919 18:40:28.662048   15867 fix.go:229] Guest: 2024-09-19 18:40:28.637721817 +0000 UTC Remote: 2024-09-19 18:40:28.552274764 +0000 UTC m=+25.521925365 (delta=85.447053ms)
	I0919 18:40:28.662069   15867 fix.go:200] guest clock delta is within tolerance: 85.447053ms
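
The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and only resynchronize when the delta exceeds a tolerance. A minimal sketch of that comparison, using the exact timestamps from this log (the 2s tolerance is an assumption for illustration, not minikube's value):

// Sketch: parse the guest epoch and compute the absolute clock delta.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	// Values taken from the log lines above.
	host := time.Date(2024, time.September, 19, 18, 40, 28, 552274764, time.UTC)
	delta, err := clockDelta("1726771228.637721817", host)
	fmt.Println(delta, delta <= 2*time.Second, err) // roughly 85ms, within tolerance
}
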
	I0919 18:40:28.662074   15867 start.go:83] releasing machines lock for "addons-140799", held for 25.527677686s
	I0919 18:40:28.662094   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:28.662325   15867 main.go:141] libmachine: (addons-140799) Calling .GetIP
	I0919 18:40:28.664882   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.665183   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:28.665221   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.665381   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:28.665853   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:28.666025   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:28.666134   15867 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:40:28.666227   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:28.666229   15867 ssh_runner.go:195] Run: cat /version.json
	I0919 18:40:28.666270   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:28.668542   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.669149   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:28.669191   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.669758   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:28.669840   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.669953   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:28.670102   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:28.670183   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:28.670216   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:28.670281   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:28.670365   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:28.670553   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:28.670721   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:28.670861   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:28.775306   15867 ssh_runner.go:195] Run: systemctl --version
	I0919 18:40:28.781281   15867 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 18:40:28.938649   15867 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 18:40:28.945132   15867 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 18:40:28.945220   15867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:40:28.960709   15867 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 18:40:28.960734   15867 start.go:495] detecting cgroup driver to use...
	I0919 18:40:28.960807   15867 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 18:40:28.980244   15867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 18:40:28.995155   15867 docker.go:217] disabling cri-docker service (if available) ...
	I0919 18:40:28.995209   15867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 18:40:29.009386   15867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 18:40:29.023139   15867 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 18:40:29.134407   15867 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 18:40:29.298579   15867 docker.go:233] disabling docker service ...
	I0919 18:40:29.298644   15867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 18:40:29.312931   15867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 18:40:29.325818   15867 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 18:40:29.456801   15867 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 18:40:29.578696   15867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 18:40:29.592719   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:40:29.610982   15867 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 18:40:29.611059   15867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:29.620876   15867 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 18:40:29.620943   15867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:29.630913   15867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:29.640752   15867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:29.650904   15867 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:40:29.661219   15867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:29.671638   15867 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:29.688467   15867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
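
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands in this log, not read back from the VM):

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
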
	I0919 18:40:29.698317   15867 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:40:29.707591   15867 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 18:40:29.707653   15867 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 18:40:29.721694   15867 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
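
The netfilter sequence above is a probe-then-fallback: the sysctl read fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled afterwards. A hypothetical stand-alone sketch of the same steps (not minikube's implementation; must run as root inside the VM):

// Sketch: ensure br_netfilter is loaded and IPv4 forwarding is on.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// The sysctl file only exists once the br_netfilter module is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
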
	I0919 18:40:29.731406   15867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:29.853825   15867 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 18:40:29.959706   15867 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 18:40:29.959782   15867 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 18:40:29.964658   15867 start.go:563] Will wait 60s for crictl version
	I0919 18:40:29.964724   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:40:29.968571   15867 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:40:30.006284   15867 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 18:40:30.006400   15867 ssh_runner.go:195] Run: crio --version
	I0919 18:40:30.036128   15867 ssh_runner.go:195] Run: crio --version
	I0919 18:40:30.066489   15867 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 18:40:30.068091   15867 main.go:141] libmachine: (addons-140799) Calling .GetIP
	I0919 18:40:30.070905   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:30.071287   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:30.071314   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:30.071517   15867 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 18:40:30.075557   15867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:40:30.087944   15867 kubeadm.go:883] updating cluster {Name:addons-140799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-140799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:40:30.088047   15867 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:30.088088   15867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:40:30.119482   15867 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0919 18:40:30.119555   15867 ssh_runner.go:195] Run: which lz4
	I0919 18:40:30.123498   15867 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 18:40:30.127510   15867 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 18:40:30.127534   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0919 18:40:31.408726   15867 crio.go:462] duration metric: took 1.285256069s to copy over tarball
	I0919 18:40:31.408793   15867 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 18:40:33.519535   15867 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.110719052s)
	I0919 18:40:33.519563   15867 crio.go:469] duration metric: took 2.110810578s to extract the tarball
	I0919 18:40:33.519571   15867 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 18:40:33.558022   15867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:40:33.597692   15867 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:40:33.597723   15867 cache_images.go:84] Images are preloaded, skipping loading
	I0919 18:40:33.597730   15867 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.31.1 crio true true} ...
	I0919 18:40:33.597829   15867 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-140799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-140799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 18:40:33.597894   15867 ssh_runner.go:195] Run: crio config
	I0919 18:40:33.647469   15867 cni.go:84] Creating CNI manager for ""
	I0919 18:40:33.647502   15867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:40:33.647515   15867 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:40:33.647541   15867 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-140799 NodeName:addons-140799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:40:33.647668   15867 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-140799"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 18:40:33.647731   15867 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:40:33.657970   15867 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:40:33.658035   15867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:40:33.667703   15867 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 18:40:33.684097   15867 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:40:33.700520   15867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
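
The 2154-byte kubeadm.yaml.new copied above is the rendered form of the config dumped earlier. A simplified, hypothetical sketch of producing such a config with text/template; the struct and the heavily trimmed template are invented for illustration and are not minikube's real types:

// Sketch: render a cut-down kubeadm config from a parameter struct.
package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	NodeIP      string
	NodeName    string
	K8sVersion  string
	PodSubnet   string
	ServiceCIDR string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, kubeadmParams{
		NodeIP:      "192.168.39.11",
		NodeName:    "addons-140799",
		K8sVersion:  "v1.31.1",
		PodSubnet:   "10.244.0.0/16",
		ServiceCIDR: "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}
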
	I0919 18:40:33.716917   15867 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0919 18:40:33.720988   15867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:40:33.733338   15867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:33.847327   15867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:40:33.866031   15867 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799 for IP: 192.168.39.11
	I0919 18:40:33.866059   15867 certs.go:194] generating shared ca certs ...
	I0919 18:40:33.866085   15867 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:33.866265   15867 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 18:40:33.951296   15867 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt ...
	I0919 18:40:33.951324   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt: {Name:mkfb352c0db47244a2063a5750d8d5eff3be313b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:33.951490   15867 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key ...
	I0919 18:40:33.951501   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key: {Name:mk77aa09d1738ce5a15885425f91951fa061297e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:33.951569   15867 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 18:40:34.189821   15867 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt ...
	I0919 18:40:34.189850   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt: {Name:mk5809fcd157a0760af6e9e3e9b2dfa45f7831d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:34.189999   15867 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key ...
	I0919 18:40:34.190011   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key: {Name:mk558aa7d6e41519abc4f66d2c3c3d714a9a5d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
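
Both the minikubeCA and proxyClientCA steps above amount to generating a key pair and a self-signed CA certificate. A minimal sketch with crypto/x509; key size, lifetime and subject are assumptions, and this is not minikube's certs package:

// Sketch: create a self-signed CA certificate and print it as PEM.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// Self-signed: the template is also the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
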
	I0919 18:40:34.190105   15867 certs.go:256] generating profile certs ...
	I0919 18:40:34.190165   15867 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/client.key
	I0919 18:40:34.190179   15867 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/client.crt with IP's: []
	I0919 18:40:34.306287   15867 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/client.crt ...
	I0919 18:40:34.306316   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/client.crt: {Name:mk94717e7316b620bc75928dcbb3f129741b105b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:34.306471   15867 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/client.key ...
	I0919 18:40:34.306482   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/client.key: {Name:mk5258372f6409329c29eddaae8d796941de379a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:34.306548   15867 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.key.6342b11f
	I0919 18:40:34.306564   15867 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.crt.6342b11f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.11]
	I0919 18:40:34.490967   15867 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.crt.6342b11f ...
	I0919 18:40:34.490999   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.crt.6342b11f: {Name:mk2d3274b6a74e5806a59441b19008158101db07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:34.491154   15867 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.key.6342b11f ...
	I0919 18:40:34.491166   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.key.6342b11f: {Name:mkd8cd9ad348c17f01b41d37dd69f69e0ffdba88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:34.491670   15867 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.crt.6342b11f -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.crt
	I0919 18:40:34.491749   15867 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.key.6342b11f -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.key
	I0919 18:40:34.491794   15867 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/proxy-client.key
	I0919 18:40:34.491812   15867 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/proxy-client.crt with IP's: []
	I0919 18:40:34.687022   15867 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/proxy-client.crt ...
	I0919 18:40:34.687051   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/proxy-client.crt: {Name:mk1b5385aaabb4eb9c372a35b69bb35a590fd715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:34.687202   15867 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/proxy-client.key ...
	I0919 18:40:34.687212   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/proxy-client.key: {Name:mkeef5bcd6f43af6c48fe0c4c331a000c3c0e1dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:34.687420   15867 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 18:40:34.687460   15867 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 18:40:34.687484   15867 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:40:34.687507   15867 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 18:40:34.688054   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:40:34.712091   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 18:40:34.741316   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:40:34.766373   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 18:40:34.790693   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:40:34.815483   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 18:40:34.841020   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:40:34.864878   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/addons-140799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 18:40:34.889590   15867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:40:34.913688   15867 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:40:34.934012   15867 ssh_runner.go:195] Run: openssl version
	I0919 18:40:34.940515   15867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:40:34.953389   15867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:34.958217   15867 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:34.958272   15867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:34.964289   15867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 18:40:34.975706   15867 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:40:34.980141   15867 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:40:34.980190   15867 kubeadm.go:392] StartCluster: {Name:addons-140799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-140799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:40:34.980256   15867 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 18:40:34.980325   15867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 18:40:35.017394   15867 cri.go:89] found id: ""
	I0919 18:40:35.017480   15867 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:40:35.029204   15867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:40:35.039472   15867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:40:35.049505   15867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:40:35.049526   15867 kubeadm.go:157] found existing configuration files:
	
	I0919 18:40:35.049578   15867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:40:35.058544   15867 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:40:35.058609   15867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:40:35.068226   15867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:40:35.077646   15867 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:40:35.077711   15867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:40:35.087158   15867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:40:35.096564   15867 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:40:35.096619   15867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:40:35.106604   15867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:40:35.116172   15867 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:40:35.116236   15867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 18:40:35.125563   15867 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 18:40:35.179723   15867 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:40:35.179972   15867 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:40:35.288259   15867 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:40:35.288377   15867 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:40:35.288500   15867 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:40:35.296885   15867 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:40:35.448985   15867 out.go:235]   - Generating certificates and keys ...
	I0919 18:40:35.449156   15867 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:40:35.449255   15867 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:40:35.554894   15867 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:40:35.623643   15867 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:40:35.732568   15867 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:40:35.790211   15867 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:40:35.920175   15867 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:40:35.920377   15867 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-140799 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I0919 18:40:36.143792   15867 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:40:36.144001   15867 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-140799 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I0919 18:40:36.373531   15867 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:40:36.548549   15867 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:40:36.679748   15867 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:40:36.679852   15867 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:40:36.773342   15867 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:40:36.933020   15867 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:40:37.187935   15867 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:40:37.299871   15867 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:40:37.482832   15867 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:40:37.483459   15867 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:40:37.488228   15867 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:40:37.490027   15867 out.go:235]   - Booting up control plane ...
	I0919 18:40:37.490159   15867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:40:37.490266   15867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:40:37.490544   15867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:40:37.504980   15867 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:40:37.511526   15867 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:40:37.511592   15867 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:40:37.638731   15867 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:40:37.638882   15867 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:40:38.139566   15867 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.573316ms
	I0919 18:40:38.139688   15867 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:40:43.138468   15867 kubeadm.go:310] [api-check] The API server is healthy after 5.001518939s
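
The kubelet-check and api-check waits above are simple health polls against /healthz with an upper bound of 4m0s. A rough sketch of that pattern; the endpoint, retry interval and TLS handling are illustrative, not kubeadm's actual client:

// Sketch: poll a /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The API server uses a freshly generated cert during bootstrap.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.39.11:8443/healthz", 4*time.Minute))
}
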
	I0919 18:40:43.149640   15867 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:40:43.167037   15867 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:40:43.187860   15867 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:40:43.188089   15867 kubeadm.go:310] [mark-control-plane] Marking the node addons-140799 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:40:43.199682   15867 kubeadm.go:310] [bootstrap-token] Using token: 4cy2xp.fvvlw0e4sjrqypwm
	I0919 18:40:43.201095   15867 out.go:235]   - Configuring RBAC rules ...
	I0919 18:40:43.201225   15867 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:40:43.205481   15867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:40:43.212244   15867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:40:43.215017   15867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:40:43.220565   15867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:40:43.224007   15867 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:40:43.547997   15867 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:40:43.970309   15867 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:40:44.546682   15867 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:40:44.546708   15867 kubeadm.go:310] 
	I0919 18:40:44.546803   15867 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:40:44.546823   15867 kubeadm.go:310] 
	I0919 18:40:44.546905   15867 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:40:44.546913   15867 kubeadm.go:310] 
	I0919 18:40:44.546934   15867 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:40:44.546983   15867 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:40:44.547030   15867 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:40:44.547048   15867 kubeadm.go:310] 
	I0919 18:40:44.547122   15867 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:40:44.547131   15867 kubeadm.go:310] 
	I0919 18:40:44.547194   15867 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:40:44.547201   15867 kubeadm.go:310] 
	I0919 18:40:44.547266   15867 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:40:44.547345   15867 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:40:44.547404   15867 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:40:44.547410   15867 kubeadm.go:310] 
	I0919 18:40:44.547533   15867 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:40:44.547649   15867 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:40:44.547661   15867 kubeadm.go:310] 
	I0919 18:40:44.547751   15867 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4cy2xp.fvvlw0e4sjrqypwm \
	I0919 18:40:44.547878   15867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 \
	I0919 18:40:44.547912   15867 kubeadm.go:310] 	--control-plane 
	I0919 18:40:44.547921   15867 kubeadm.go:310] 
	I0919 18:40:44.548034   15867 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:40:44.548044   15867 kubeadm.go:310] 
	I0919 18:40:44.548158   15867 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4cy2xp.fvvlw0e4sjrqypwm \
	I0919 18:40:44.548288   15867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 
	I0919 18:40:44.549277   15867 kubeadm.go:310] W0919 18:40:35.159840     820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:44.549567   15867 kubeadm.go:310] W0919 18:40:35.161096     820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:44.549715   15867 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:40:44.549732   15867 cni.go:84] Creating CNI manager for ""
	I0919 18:40:44.549741   15867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:40:44.551458   15867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 18:40:44.552829   15867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 18:40:44.566566   15867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0919 18:40:44.584456   15867 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:40:44.584555   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:44.584592   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-140799 minikube.k8s.io/updated_at=2024_09_19T18_40_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-140799 minikube.k8s.io/primary=true
	I0919 18:40:44.615592   15867 ops.go:34] apiserver oom_adj: -16
	I0919 18:40:44.727857   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:45.228229   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:45.727983   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:46.228085   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:46.728526   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:47.228645   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:47.729017   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:48.228958   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:48.728173   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:49.228078   15867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:49.369192   15867 kubeadm.go:1113] duration metric: took 4.784703554s to wait for elevateKubeSystemPrivileges
	I0919 18:40:49.369235   15867 kubeadm.go:394] duration metric: took 14.38904832s to StartCluster
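
The repeated `kubectl get sa default` runs above are the elevateKubeSystemPrivileges wait: the default service account only appears asynchronously after kubeadm init, so the check is retried roughly every 500ms until it succeeds. A hypothetical version of that loop; the command and kubeconfig path are copied from the log, the timeout is assumed:

// Sketch: retry "kubectl get sa default" until the service account exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
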
	I0919 18:40:49.369258   15867 settings.go:142] acquiring lock: {Name:mk58f627f177d13dd5c0d47e681e886cab43cce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:49.369417   15867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 18:40:49.369831   15867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/kubeconfig: {Name:mk632e082e805bb0ee3f336087f78588814f24af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:49.370023   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:40:49.370034   15867 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:40:49.370095   15867 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0919 18:40:49.370198   15867 addons.go:69] Setting yakd=true in profile "addons-140799"
	I0919 18:40:49.370201   15867 addons.go:69] Setting inspektor-gadget=true in profile "addons-140799"
	I0919 18:40:49.370199   15867 addons.go:69] Setting cloud-spanner=true in profile "addons-140799"
	I0919 18:40:49.370231   15867 config.go:182] Loaded profile config "addons-140799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:49.370247   15867 addons.go:234] Setting addon inspektor-gadget=true in "addons-140799"
	I0919 18:40:49.370240   15867 addons.go:69] Setting storage-provisioner=true in profile "addons-140799"
	I0919 18:40:49.370248   15867 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-140799"
	I0919 18:40:49.370260   15867 addons.go:69] Setting volumesnapshots=true in profile "addons-140799"
	I0919 18:40:49.370263   15867 addons.go:69] Setting gcp-auth=true in profile "addons-140799"
	I0919 18:40:49.370265   15867 addons.go:234] Setting addon storage-provisioner=true in "addons-140799"
	I0919 18:40:49.370271   15867 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-140799"
	I0919 18:40:49.370274   15867 addons.go:234] Setting addon volumesnapshots=true in "addons-140799"
	I0919 18:40:49.370234   15867 addons.go:234] Setting addon yakd=true in "addons-140799"
	I0919 18:40:49.370286   15867 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-140799"
	I0919 18:40:49.370296   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.370303   15867 addons.go:69] Setting metrics-server=true in profile "addons-140799"
	I0919 18:40:49.370308   15867 addons.go:69] Setting ingress-dns=true in profile "addons-140799"
	I0919 18:40:49.370312   15867 addons.go:69] Setting default-storageclass=true in profile "addons-140799"
	I0919 18:40:49.370317   15867 addons.go:234] Setting addon metrics-server=true in "addons-140799"
	I0919 18:40:49.370320   15867 addons.go:234] Setting addon ingress-dns=true in "addons-140799"
	I0919 18:40:49.370323   15867 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-140799"
	I0919 18:40:49.370330   15867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-140799"
	I0919 18:40:49.370346   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.370349   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.370291   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.370312   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.370297   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.370759   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.370772   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.370785   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.370791   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.370801   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.370804   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.370824   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.370251   15867 addons.go:69] Setting volcano=true in profile "addons-140799"
	I0919 18:40:49.370335   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.370854   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.370865   15867 addons.go:234] Setting addon volcano=true in "addons-140799"
	I0919 18:40:49.370304   15867 addons.go:69] Setting ingress=true in profile "addons-140799"
	I0919 18:40:49.370759   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.370243   15867 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-140799"
	I0919 18:40:49.370908   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.370913   15867 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-140799"
	I0919 18:40:49.370306   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.371082   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.371102   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.370282   15867 mustload.go:65] Loading cluster: addons-140799
	I0919 18:40:49.370255   15867 addons.go:69] Setting registry=true in profile "addons-140799"
	I0919 18:40:49.371161   15867 addons.go:234] Setting addon registry=true in "addons-140799"
	I0919 18:40:49.370759   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.371179   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.370884   15867 addons.go:234] Setting addon ingress=true in "addons-140799"
	I0919 18:40:49.370296   15867 addons.go:69] Setting helm-tiller=true in profile "addons-140799"
	I0919 18:40:49.371218   15867 addons.go:234] Setting addon helm-tiller=true in "addons-140799"
	I0919 18:40:49.370241   15867 addons.go:234] Setting addon cloud-spanner=true in "addons-140799"
	I0919 18:40:49.371322   15867 config.go:182] Loaded profile config "addons-140799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:49.371369   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.371441   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.371472   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.371480   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.371517   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.371536   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.371594   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.371960   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.371985   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.371985   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.372028   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.372045   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.372064   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.372086   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.372120   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.372138   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.372232   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.372257   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.372322   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.372336   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.372356   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.389486   15867 out.go:177] * Verifying Kubernetes components...
	I0919 18:40:49.392899   15867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:49.393186   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0919 18:40:49.393864   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.393906   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.396311   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0919 18:40:49.396413   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I0919 18:40:49.396477   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0919 18:40:49.396538   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I0919 18:40:49.396605   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38825
	I0919 18:40:49.397042   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.397623   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.397643   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.397702   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.397798   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.397866   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.397920   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.397979   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.398586   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.398639   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.398655   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.398741   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.398756   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.398771   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.398780   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.398881   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.398890   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.398928   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.398951   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.399134   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.399205   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.399599   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.399621   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.399633   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.399650   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.400137   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.400156   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.400137   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.400579   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.400615   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.400694   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.400720   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.408606   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.409102   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.409150   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.413323   15867 addons.go:234] Setting addon default-storageclass=true in "addons-140799"
	I0919 18:40:49.413366   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.413730   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.413751   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.415555   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0919 18:40:49.419361   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0919 18:40:49.419849   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.419994   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0919 18:40:49.420392   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.420411   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.421003   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.421143   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.421241   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.421676   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.421693   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.422046   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.422617   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.422674   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.423982   15867 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-140799"
	I0919 18:40:49.424056   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.424437   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.424487   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.434660   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42055
	I0919 18:40:49.435158   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.435710   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.435736   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.436039   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.436574   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.436612   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.439701   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I0919 18:40:49.440200   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.440743   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.440760   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.441153   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.441773   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.441810   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.441997   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I0919 18:40:49.442134   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35485
	I0919 18:40:49.442310   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.442653   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.443415   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.443432   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.443557   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.443567   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.443893   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.443955   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.444542   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.444582   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.444813   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.445419   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37119
	I0919 18:40:49.445870   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.446382   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.446398   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.446761   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.446812   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:49.447162   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.447185   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.447389   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0919 18:40:49.447896   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.448021   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.448050   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.448488   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.448508   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.448840   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.448982   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.450800   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.451142   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.451600   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.451618   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.452380   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.452958   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.452997   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.453399   15867 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:40:49.454065   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39571
	I0919 18:40:49.454894   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.455082   15867 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:40:49.455106   15867 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:40:49.455124   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.455464   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.455479   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.457384   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.457897   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.457921   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.458152   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.458450   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.458478   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.458638   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.458832   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.459224   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.459333   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.459631   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0919 18:40:49.462188   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I0919 18:40:49.462853   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.463257   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.463271   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.463641   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.463799   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.465552   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.466014   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.466936   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.466950   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.467260   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.467389   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.467536   15867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:40:49.468671   15867 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:49.468694   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:40:49.468713   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.468931   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.469016   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
	I0919 18:40:49.469360   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.469776   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.469792   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.470151   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.470344   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.470543   15867 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:40:49.472123   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.472283   15867 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:40:49.472439   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.472835   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.472853   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.473110   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.473315   15867 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:40:49.473385   15867 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:40:49.473400   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:40:49.473416   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.473500   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.473620   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.473737   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.475721   15867 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:40:49.476334   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.476619   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.476643   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.476853   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.477020   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.477186   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.477304   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.477566   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0919 18:40:49.477724   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0919 18:40:49.478094   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.478183   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.478758   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.478774   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.479122   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.479310   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.479826   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32901
	I0919 18:40:49.479868   15867 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:40:49.480200   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.480216   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.480288   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.480636   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.480684   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41815
	I0919 18:40:49.480978   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.481029   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.481113   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.482407   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.482422   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.482772   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.483316   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.483360   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.483695   15867 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:40:49.483824   15867 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:40:49.484603   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40529
	I0919 18:40:49.485052   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.485278   15867 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:40:49.485302   15867 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:40:49.485322   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.485689   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.485706   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.486251   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.486631   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.486882   15867 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:40:49.488097   15867 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:40:49.488956   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.489548   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.489567   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.490041   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.490238   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.490453   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.490609   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.490629   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.490782   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.490820   15867 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:40:49.490826   15867 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0919 18:40:49.491136   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.491505   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.491625   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.491939   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.492374   15867 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0919 18:40:49.492396   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0919 18:40:49.492412   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.493385   15867 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:40:49.493436   15867 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:40:49.494617   15867 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:49.494633   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:40:49.494636   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
	I0919 18:40:49.494649   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.494707   15867 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:40:49.494714   15867 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:40:49.494729   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.496072   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.496144   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.496661   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.496678   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.496733   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.496746   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.497132   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.497666   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.497687   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.497899   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.498066   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.498228   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.498271   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.498447   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.498699   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.498716   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.498875   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.499003   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.499147   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.499260   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.500776   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.501099   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41573
	I0919 18:40:49.501058   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.501233   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.501459   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.501517   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.501568   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.501671   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.501793   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.501965   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.501979   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.502276   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.502465   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.503938   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.507700   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39235
	I0919 18:40:49.507737   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0919 18:40:49.507926   15867 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:40:49.508230   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.508239   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.508311   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0919 18:40:49.508743   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.508759   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.509113   15867 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:40:49.509132   15867 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:40:49.509150   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.509202   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.509216   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.509217   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.509767   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.509784   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.510405   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:49.510459   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:49.510865   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41543
	I0919 18:40:49.511275   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.511540   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.511857   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.511869   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.512512   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.512653   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.512679   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.513046   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.513103   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.513270   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.513443   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.513506   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0919 18:40:49.513523   15867 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:40:49.513783   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.513841   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.514000   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.514212   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.514421   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:49.514433   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:49.514719   15867 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:49.514740   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:40:49.514756   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.515772   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:49.515784   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:49.515788   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.515793   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:49.515800   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:49.515806   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:49.515865   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.515876   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.516139   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.516269   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.516283   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.516324   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.516345   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:49.516365   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:49.516373   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	W0919 18:40:49.516437   15867 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0919 18:40:49.516653   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.516856   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.518581   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.518923   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.519142   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.519161   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.519366   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.519541   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.519652   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.519751   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.520015   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.520100   15867 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:40:49.521330   15867 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:40:49.521352   15867 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:49.522958   15867 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:49.522981   15867 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:40:49.524353   15867 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:49.524372   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:40:49.524390   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.524477   15867 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:40:49.524491   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:40:49.524507   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.524728   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I0919 18:40:49.525482   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.525988   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.526004   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.526332   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.526493   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.528245   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.528981   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.529528   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.529551   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.529649   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.529822   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.529965   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.529990   15867 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:40:49.530082   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.530119   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.530486   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.530503   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.530779   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.530804   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0919 18:40:49.530927   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.531036   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.531137   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.531263   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.531362   15867 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:40:49.531371   15867 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:40:49.531381   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.531745   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.531759   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.532088   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.532280   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.533606   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.533646   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.534080   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.534096   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.534199   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.534297   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.534362   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.534424   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.535354   15867 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	W0919 18:40:49.535883   15867 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:42040->192.168.39.11:22: read: connection reset by peer
	I0919 18:40:49.535904   15867 retry.go:31] will retry after 225.814612ms: ssh: handshake failed: read tcp 192.168.39.1:42040->192.168.39.11:22: read: connection reset by peer
	I0919 18:40:49.536556   15867 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:40:49.536580   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:40:49.536596   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.539407   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.539826   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.539848   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.539998   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.540132   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.540261   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.540380   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.541399   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0919 18:40:49.541720   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:49.542034   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:49.542044   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:49.542249   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:49.542416   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:49.543427   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:49.543590   15867 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:49.543599   15867 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:40:49.543608   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:49.545725   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.545990   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:49.546001   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:49.546117   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:49.546234   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:49.546383   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:49.546462   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:49.925372   15867 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:40:49.925404   15867 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:40:49.944533   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:40:49.988288   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:50.023357   15867 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:40:50.023391   15867 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:40:50.063697   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:50.073258   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:40:50.076237   15867 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:40:50.076257   15867 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:40:50.097752   15867 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0919 18:40:50.097773   15867 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0919 18:40:50.099736   15867 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:40:50.099750   15867 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:40:50.106125   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:50.107475   15867 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:40:50.107489   15867 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:40:50.144340   15867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:40:50.144362   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:40:50.167299   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:50.222074   15867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:40:50.222096   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:40:50.230011   15867 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:40:50.230031   15867 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:40:50.248934   15867 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:40:50.248960   15867 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0919 18:40:50.279668   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:50.282254   15867 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:50.282271   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:40:50.294741   15867 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:40:50.294771   15867 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:40:50.329486   15867 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:40:50.329508   15867 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:40:50.356918   15867 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:40:50.356940   15867 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:40:50.369872   15867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:40:50.369897   15867 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:40:50.443582   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:40:50.493688   15867 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:40:50.493714   15867 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:40:50.548613   15867 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:40:50.548640   15867 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:40:50.584277   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:50.607485   15867 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:40:50.607511   15867 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:40:50.608747   15867 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:40:50.608769   15867 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:40:50.732036   15867 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:40:50.732059   15867 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:40:50.739930   15867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:50.739948   15867 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:40:50.790115   15867 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:40:50.790138   15867 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:40:50.861042   15867 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:40:50.861077   15867 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:40:50.872714   15867 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:50.872738   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:40:50.891521   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:50.989485   15867 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:50.989505   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:40:51.107515   15867 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:40:51.107540   15867 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:40:51.142752   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:51.232722   15867 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:40:51.232750   15867 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:40:51.288467   15867 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:40:51.288499   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:40:51.399536   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:51.505891   15867 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:40:51.505915   15867 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:40:51.512381   15867 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:40:51.512398   15867 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:40:51.760954   15867 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:51.760974   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:40:51.818920   15867 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:40:51.818942   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:40:51.944927   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:52.034725   15867 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:40:52.034752   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:40:52.301410   15867 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:52.301434   15867 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:40:52.572322   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:53.566416   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.621847342s)
	I0919 18:40:53.566465   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:53.566478   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:53.566725   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:53.566749   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:53.566766   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:53.566785   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:53.566797   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:53.567014   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:53.567107   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:53.567085   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:55.030583   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.042257846s)
	I0919 18:40:55.030592   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.966860579s)
	I0919 18:40:55.030632   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:55.030645   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:55.030656   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:55.030671   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:55.030890   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:55.030903   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:55.030911   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:55.030918   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:55.032283   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:55.032284   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:55.032298   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:55.032295   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:55.032298   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:55.032313   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:55.032323   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:55.032331   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:55.032527   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:55.032534   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:55.032539   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:55.199090   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:55.199117   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:55.199514   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:55.199622   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:55.199644   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:56.539276   15867 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:40:56.539315   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:56.542344   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:56.542711   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:56.542742   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:56.542896   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:56.543176   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:56.543334   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:56.543491   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:56.995586   15867 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:40:57.183968   15867 addons.go:234] Setting addon gcp-auth=true in "addons-140799"
	I0919 18:40:57.184020   15867 host.go:66] Checking if "addons-140799" exists ...
	I0919 18:40:57.184311   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:57.184350   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:57.200147   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0919 18:40:57.200645   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:57.201186   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:57.201211   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:57.201519   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:57.201950   15867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:40:57.201990   15867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:40:57.217152   15867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40685
	I0919 18:40:57.217643   15867 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:40:57.218092   15867 main.go:141] libmachine: Using API Version  1
	I0919 18:40:57.218110   15867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:40:57.218492   15867 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:40:57.218671   15867 main.go:141] libmachine: (addons-140799) Calling .GetState
	I0919 18:40:57.220507   15867 main.go:141] libmachine: (addons-140799) Calling .DriverName
	I0919 18:40:57.220737   15867 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:40:57.220758   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHHostname
	I0919 18:40:57.224174   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:57.224572   15867 main.go:141] libmachine: (addons-140799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:93:a9", ip: ""} in network mk-addons-140799: {Iface:virbr1 ExpiryTime:2024-09-19 19:40:18 +0000 UTC Type:0 Mac:52:54:00:f1:93:a9 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-140799 Clientid:01:52:54:00:f1:93:a9}
	I0919 18:40:57.224598   15867 main.go:141] libmachine: (addons-140799) DBG | domain addons-140799 has defined IP address 192.168.39.11 and MAC address 52:54:00:f1:93:a9 in network mk-addons-140799
	I0919 18:40:57.224749   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHPort
	I0919 18:40:57.224915   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHKeyPath
	I0919 18:40:57.225076   15867 main.go:141] libmachine: (addons-140799) Calling .GetSSHUsername
	I0919 18:40:57.225201   15867 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/addons-140799/id_rsa Username:docker}
	I0919 18:40:58.063587   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.990292872s)
	I0919 18:40:58.063641   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.063642   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.896322076s)
	I0919 18:40:58.063595   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.957442832s)
	I0919 18:40:58.063663   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.063681   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.063682   15867 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.84156957s)
	I0919 18:40:58.063652   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.063700   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.063707   15867 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.841600745s)
	I0919 18:40:58.063715   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.063726   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.784038001s)
	I0919 18:40:58.063746   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.063758   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.063695   15867 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 18:40:58.063824   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.47950775s)
	I0919 18:40:58.063852   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.063863   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.063980   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.1724321s)
	I0919 18:40:58.063998   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064007   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.063768   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.620153201s)
	I0919 18:40:58.064057   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.064073   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064076   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.921299195s)
	I0919 18:40:58.064082   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.064084   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064090   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064098   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.064100   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064127   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.064135   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.064143   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064149   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064154   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.064162   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.064170   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064176   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064195   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.064202   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.064209   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064215   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064220   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.664659043s)
	I0919 18:40:58.064252   15867 main.go:141] libmachine: Successfully made call to close driver server
	W0919 18:40:58.064251   15867 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:40:58.064274   15867 retry.go:31] will retry after 200.187179ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
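The failure above is the usual CRD ordering race: the `csi-hostpath-snapclass` VolumeSnapshotClass is applied in the same kubectl batch as the CRDs that define it, before the API server has established the new `snapshot.storage.k8s.io/v1` types. A minimal sketch of the same apply done race-free is below; it is illustrative only, not part of the captured log, and assumes the stock kubectl binary shipped at /var/lib/minikube/binaries/v1.31.1.

	# Sketch only (not captured output): apply the snapshot CRDs first,
	# wait until the new API is established, then apply the snapshot class.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

In this run minikube instead retries the whole batch (retry.go, above); the retried `kubectl apply --force` issued at 18:40:58.265 completes cleanly at 18:41:00.345 once the CRDs have registered.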
	I0919 18:40:58.064259   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.064292   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064295   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.119333634s)
	I0919 18:40:58.064316   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064326   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064299   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064422   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.064429   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.064436   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064442   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064698   15867 node_ready.go:35] waiting up to 6m0s for node "addons-140799" to be "Ready" ...
	I0919 18:40:58.064762   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.064789   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.064799   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.064806   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064813   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064821   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.064854   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.064856   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.064862   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.064871   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064874   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.064877   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064883   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.064883   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.064904   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.064928   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.064932   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.064935   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.064942   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.064949   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.064952   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.064959   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.064967   15867 addons.go:475] Verifying addon ingress=true in "addons-140799"
	I0919 18:40:58.064997   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.065004   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.068108   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.068147   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.068154   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.068163   15867 addons.go:475] Verifying addon metrics-server=true in "addons-140799"
	I0919 18:40:58.068290   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.068310   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.068316   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.068650   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.068671   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.068680   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.068810   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.068818   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.068688   15867 addons.go:475] Verifying addon registry=true in "addons-140799"
	I0919 18:40:58.068825   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.068831   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.068694   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.068712   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.068934   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.069021   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.068771   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.068788   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:58.068753   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.069986   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.069047   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.070022   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.069387   15867 out.go:177] * Verifying ingress addon...
	I0919 18:40:58.070657   15867 out.go:177] * Verifying registry addon...
	I0919 18:40:58.071409   15867 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-140799 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:40:58.072027   15867 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:40:58.072806   15867 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 18:40:58.092126   15867 node_ready.go:49] node "addons-140799" has status "Ready":"True"
	I0919 18:40:58.092160   15867 node_ready.go:38] duration metric: took 27.443886ms for node "addons-140799" to be "Ready" ...
	I0919 18:40:58.092173   15867 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
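The extra system-pod wait logged above corresponds roughly to the following manual checks with kubectl, using the label selectors listed in that log line; this is an illustrative sketch of an equivalent check, not how pod_ready.go itself is implemented.

	# Sketch only (not captured output): wait up to 6m for system-critical pods.
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m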
	I0919 18:40:58.098159   15867 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:58.098187   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.108605   15867 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:40:58.108628   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.157818   15867 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4hxg6" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.178717   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:58.178737   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:58.179102   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:58.179126   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:58.188108   15867 pod_ready.go:93] pod "coredns-7c65d6cfc9-4hxg6" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:58.188135   15867 pod_ready.go:82] duration metric: took 30.287246ms for pod "coredns-7c65d6cfc9-4hxg6" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.188148   15867 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v9mp6" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.217264   15867 pod_ready.go:93] pod "coredns-7c65d6cfc9-v9mp6" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:58.217286   15867 pod_ready.go:82] duration metric: took 29.132021ms for pod "coredns-7c65d6cfc9-v9mp6" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.217295   15867 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-140799" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.265193   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:58.289280   15867 pod_ready.go:93] pod "etcd-addons-140799" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:58.289305   15867 pod_ready.go:82] duration metric: took 72.003002ms for pod "etcd-addons-140799" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.289317   15867 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-140799" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.312436   15867 pod_ready.go:93] pod "kube-apiserver-addons-140799" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:58.312458   15867 pod_ready.go:82] duration metric: took 23.133503ms for pod "kube-apiserver-addons-140799" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.312470   15867 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-140799" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.485714   15867 pod_ready.go:93] pod "kube-controller-manager-addons-140799" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:58.485736   15867 pod_ready.go:82] duration metric: took 173.258999ms for pod "kube-controller-manager-addons-140799" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.485751   15867 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-thqhm" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.569818   15867 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-140799" context rescaled to 1 replicas
	I0919 18:40:58.576916   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.577300   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.867590   15867 pod_ready.go:93] pod "kube-proxy-thqhm" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:58.867615   15867 pod_ready.go:82] duration metric: took 381.857372ms for pod "kube-proxy-thqhm" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:58.867629   15867 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-140799" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:59.339750   15867 pod_ready.go:93] pod "kube-scheduler-addons-140799" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:59.339782   15867 pod_ready.go:82] duration metric: took 472.144484ms for pod "kube-scheduler-addons-140799" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:59.339796   15867 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:59.345553   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.346940   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.582790   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.586028   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.965475   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.393096011s)
	I0919 18:40:59.965526   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:59.965539   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:59.965596   15867 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.744836499s)
	I0919 18:40:59.965854   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:40:59.965861   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:59.965873   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:59.965887   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:40:59.965898   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:40:59.966158   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:40:59.966187   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:40:59.966198   15867 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-140799"
	I0919 18:40:59.967669   15867 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:40:59.967677   15867 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:59.969431   15867 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:40:59.970290   15867 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:40:59.970742   15867 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:40:59.970757   15867 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:40:59.982069   15867 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:59.982100   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.144629   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.146459   15867 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:41:00.146475   15867 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:41:00.156774   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.241967   15867 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:41:00.241989   15867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:41:00.298865   15867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:41:00.345378   15867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.080137357s)
	I0919 18:41:00.345425   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:41:00.345437   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:41:00.345655   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:41:00.345715   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:41:00.345730   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:41:00.345745   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:41:00.345753   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:41:00.345948   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:41:00.345993   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:41:00.345976   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:41:00.474426   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.576815   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.578326   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.975069   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.091390   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.095687   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.283510   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:41:01.283538   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:41:01.283842   15867 main.go:141] libmachine: (addons-140799) DBG | Closing plugin on server side
	I0919 18:41:01.283876   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:41:01.283885   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:41:01.283895   15867 main.go:141] libmachine: Making call to close driver server
	I0919 18:41:01.283905   15867 main.go:141] libmachine: (addons-140799) Calling .Close
	I0919 18:41:01.284116   15867 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:41:01.284144   15867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:41:01.285476   15867 addons.go:475] Verifying addon gcp-auth=true in "addons-140799"
	I0919 18:41:01.287183   15867 out.go:177] * Verifying gcp-auth addon...
	I0919 18:41:01.289530   15867 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:41:01.309320   15867 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:41:01.309348   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.351477   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:01.478508   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.577048   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.578447   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.792824   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.977359   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.081650   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.083025   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.293012   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.476543   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.577550   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.578019   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.792907   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.975566   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.080875   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.082598   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.293563   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.475712   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.576256   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.578005   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.792951   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.846850   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:03.975355   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.075929   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.077632   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.293954   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.571675   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.576126   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.577147   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.793418   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.976261   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.076359   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.077057   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.292804   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.475184   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.577031   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.577976   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.794541   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.975900   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.077242   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.077584   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.293031   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.346626   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:06.476162   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.578080   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.578107   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.793824   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.975502   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.077590   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.077717   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.293116   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.475200   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.576168   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.576465   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.793450   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.975291   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.076160   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:08.076528   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.293945   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.475875   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.576560   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:08.576826   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.793522   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.846293   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:08.976172   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.075841   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.076367   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:09.295501   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.475463   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.576186   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.577020   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:09.792906   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.975501   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.079137   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:10.079386   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.293438   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.474983   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.576064   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.578110   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:10.792780   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.975644   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.076972   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:11.079144   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.292747   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.346620   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:11.475157   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.578134   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:11.578405   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.792639   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.975143   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.076260   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.077318   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:12.370811   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.475458   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.577595   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:12.578364   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.792756   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.975403   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.077134   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:13.077677   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.293679   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.477612   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.576556   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.577724   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:13.792921   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.945974   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:13.980547   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.076378   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:14.077337   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.293310   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.476304   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.577222   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:14.577496   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.793567   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.976086   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.076661   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:15.078249   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:15.292418   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.475917   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.576716   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:15.576869   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:15.793882   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.974901   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.075739   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:16.076605   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:16.294243   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.345153   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:16.475311   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.576160   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:16.577515   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:16.793526   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.975631   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.077450   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:17.077996   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:17.293770   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.475504   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.576757   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:17.576867   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:17.793429   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.974888   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.076236   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:18.077521   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:18.293339   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:18.349307   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:18.477601   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.576627   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:18.577025   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:18.792498   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:18.976208   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.076422   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:19.077646   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:19.294136   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.475627   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.576602   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:19.577436   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:19.793771   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.975552   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.077003   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:20.077428   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:20.293994   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:20.475810   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.576714   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:20.577046   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:20.793697   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:20.847640   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:20.975655   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.077365   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:21.078474   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:21.293184   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:21.474962   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.576730   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:21.578076   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:21.792789   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:21.975127   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.076490   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:22.077114   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:22.292980   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:22.475500   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.579182   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:22.579765   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:22.793232   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:22.975164   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:23.077187   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:23.077367   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:23.293594   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:23.348047   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:23.475433   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:23.576788   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:23.577028   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:23.792541   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:23.976160   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:24.077468   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:24.077578   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:24.294559   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.475273   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:24.577294   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:24.577527   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:24.793126   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.976519   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:25.076027   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:25.076611   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:25.294067   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:25.475416   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:25.577696   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:25.577845   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:25.794049   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:25.846927   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:25.974809   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:26.077044   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:26.077624   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:26.292779   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:26.475489   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:26.577422   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:26.577622   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:26.793383   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:26.974856   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:27.075285   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:27.076504   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:27.293634   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:27.475225   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:27.576597   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:27.576880   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:27.793476   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:27.974436   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:28.077091   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:28.077487   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:28.294011   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:28.346518   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:28.475184   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:28.576291   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:28.577484   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:28.793382   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:28.975192   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:29.076143   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:29.077020   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:29.293115   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.474775   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:29.578038   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:29.581640   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:29.792580   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.975567   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:30.076715   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:30.077273   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:30.293228   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:30.349387   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:30.475829   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:30.577245   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:30.577360   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:30.793280   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:30.975189   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:31.077612   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:31.077832   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:31.293718   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:31.475511   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:31.576611   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:31.577483   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:31.793202   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:31.975081   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:32.077239   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:32.078090   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:32.293464   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.474985   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:32.577591   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:32.577809   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:32.792804   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.846596   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:32.974674   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:33.077151   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:33.077502   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:33.292733   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:33.476071   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:33.577466   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:33.577618   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:33.793195   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:33.975512   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:34.076943   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:34.077355   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:34.293422   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.474713   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:34.577026   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:34.577413   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:34.793445   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.846913   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:34.975222   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:35.078611   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:35.078752   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:35.293741   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:35.475406   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:35.576268   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:35.576984   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:35.792821   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:35.974776   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:36.075486   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:36.076765   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:36.292858   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.475457   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:36.609734   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:36.609951   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:36.793525   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.975003   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:37.075600   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:37.077418   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:37.294171   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:37.351548   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:37.475232   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:37.576479   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:37.577335   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:37.793448   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:37.976021   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:38.076374   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:38.076937   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:38.293860   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:38.480179   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:38.577615   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:38.577738   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:38.794749   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:38.975279   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:39.076897   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:39.077420   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:39.293334   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:39.474887   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:39.578191   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:39.578506   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:39.793773   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:39.847119   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:39.974836   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:40.075524   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:40.077096   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:40.292965   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:40.474835   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:40.576752   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:40.579154   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:40.793179   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:40.977532   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:41.076585   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:41.078289   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:41.293079   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:41.477278   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:41.576739   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:41.577142   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:41.793130   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:41.974398   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:42.078965   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:42.081751   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:42.293762   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:42.346743   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:42.475532   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:42.577331   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:42.577743   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:42.795330   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:42.975616   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:43.077408   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:43.077684   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:43.292538   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:43.476106   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:43.579048   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:43.579193   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:43.793396   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:43.975383   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:44.076050   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:44.076666   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:44.294435   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:44.474624   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:44.580355   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:44.580669   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:44.794040   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:44.846297   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:44.975482   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:45.076145   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:45.076897   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:45.294043   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:45.474773   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:45.577223   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:45.577386   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:45.793374   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:45.975407   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:46.076949   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:46.077210   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:46.292812   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:46.474947   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:46.577206   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:46.577279   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:46.793552   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:46.847026   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:46.975238   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:47.076499   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:47.076780   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:47.293890   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.475194   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:47.576805   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:47.577331   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:47.793231   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.975380   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:48.076137   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:48.076390   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:48.292791   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:48.475556   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:48.576264   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:48.577322   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:48.795205   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:48.975202   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:49.079749   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:49.079923   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:49.293532   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.345479   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:49.475465   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:49.576641   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:49.577777   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:49.794255   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.974919   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:50.075386   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:50.076391   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:50.294104   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:50.475887   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:50.576515   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:50.576614   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:50.798876   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:50.975505   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:51.076308   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:51.078265   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:51.293256   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:51.345969   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:51.475210   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:51.576674   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:51.577562   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:51.792725   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:51.975974   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:52.079271   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:52.080996   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:52.293517   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:52.474718   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:52.576551   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:52.576588   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:52.794703   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:52.975661   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:53.076308   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:53.076731   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:53.293936   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:53.475141   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:53.575870   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:53.576882   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:53.792811   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:53.846985   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:53.974656   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:54.076342   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:54.076645   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:54.293493   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:54.474840   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:54.576517   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:54.579040   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:54.792689   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:54.975384   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:55.079544   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:55.079875   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:55.292877   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:55.474867   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:55.578092   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:55.578338   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:55.793044   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:55.975485   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:56.076122   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:56.077444   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:56.292811   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:56.347631   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:56.474630   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:56.576466   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:56.576574   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:56.793169   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:57.037801   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:57.077905   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:57.078040   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:57.292633   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:57.474952   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:57.576152   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:57.576567   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:57.794081   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:57.974707   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:58.075440   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:58.076892   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:58.294190   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:58.474940   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:58.576571   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:58.577136   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:58.795760   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:58.847047   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:58.975041   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:59.366531   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:59.366742   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:59.367205   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:59.475038   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:59.577251   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:59.578163   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:59.792764   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:59.975096   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:00.076497   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:00.076969   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:00.293333   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:00.476116   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:00.578294   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:00.578743   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:00.793109   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:00.850235   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:00.975085   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:01.076967   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:01.079411   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:01.293216   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:01.476706   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:01.576964   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:01.577560   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:01.796700   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:01.983302   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:02.076275   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:02.077250   15867 kapi.go:107] duration metric: took 1m4.00444177s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 18:42:02.293306   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:02.475459   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:02.575987   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:02.796493   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:02.975520   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:03.076551   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:03.293387   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:03.345968   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:03.475361   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:03.576534   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:03.794043   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:03.975564   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:04.076315   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:04.292705   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:04.475969   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:04.576197   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:04.792841   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:04.974772   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:05.076087   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:05.292838   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:05.350522   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:05.475199   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:05.577149   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:05.792813   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:05.975875   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:06.076055   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:06.293042   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:06.475756   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:06.582654   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:06.805331   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:06.975499   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:07.076174   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:07.293039   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:07.474461   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:07.576694   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:07.794084   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:07.846020   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:07.976667   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:08.079389   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:08.294419   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:08.475483   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:08.576467   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:08.808468   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:08.976665   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:09.077776   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:09.293286   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:09.475638   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:09.575974   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:09.793867   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:10.316385   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:10.316701   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:10.330528   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:10.331554   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:10.476323   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:10.577610   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:10.792752   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:10.975691   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:11.076182   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:11.292647   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:11.493935   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:11.606018   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:11.793948   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:11.975601   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:12.076540   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:12.293590   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:12.346684   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:12.474865   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:12.576969   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:12.793364   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:12.975930   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:13.076682   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:13.298546   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:13.474961   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:13.576493   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:13.801907   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:13.976186   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:14.079655   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:14.293646   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:14.474899   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:14.576985   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:14.794252   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:14.846448   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:14.975497   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:15.085117   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:15.294777   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:15.476744   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:15.578536   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:15.793408   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:15.975944   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:16.076033   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:16.293665   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:16.478426   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:16.580378   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:16.792704   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:16.847133   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:16.980235   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:17.075860   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:17.293579   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:17.483833   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:17.584214   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:17.793686   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:17.975904   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:18.484870   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:18.484914   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:18.486142   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:18.580305   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:18.800802   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:18.848325   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:18.976440   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:19.079249   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:19.293477   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:19.475744   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:19.586231   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:19.794654   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:19.974627   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:20.076887   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:20.293281   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:20.475498   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:20.575941   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:20.793753   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:20.975898   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:21.076690   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:21.294691   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:21.348710   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:21.475343   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:21.577756   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:21.793170   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:21.975070   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:22.076125   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:22.293574   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:22.475117   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:22.576911   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:22.793352   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:22.976222   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:23.081811   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:23.292837   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:23.475770   15867 kapi.go:107] duration metric: took 1m23.50547655s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:42:23.578321   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:23.793349   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:23.846067   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:24.076677   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:24.293142   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:24.576502   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:24.793027   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:25.076005   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:25.293276   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:25.578335   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:25.792712   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:25.846216   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:26.075852   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:26.293408   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:26.575679   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:26.793286   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:27.075458   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:27.292862   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:27.576120   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:27.792808   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:27.846953   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:28.078218   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:28.293277   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:28.576967   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:28.793840   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:29.079604   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:29.293240   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:29.576680   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:29.793243   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:30.076327   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:30.292798   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:30.348727   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:30.576892   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:30.794186   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:31.076639   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:31.293297   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:31.576795   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:31.793644   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:32.075924   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:32.294324   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:32.576588   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:32.793044   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:32.846455   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:33.076381   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:33.293471   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:33.576766   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:33.793670   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:34.077449   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:34.292837   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:34.575845   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:34.793676   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:34.847177   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:35.076524   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:35.293677   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:35.578680   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:35.794065   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:36.079368   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:36.293730   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:36.576129   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:36.793772   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:37.076798   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:37.293342   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:37.345769   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:37.576946   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:37.794405   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:38.076573   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:38.293010   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:38.577016   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:38.794634   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:39.076926   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:39.293485   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:39.576948   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:39.794478   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:39.846596   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:40.076593   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:40.293165   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:40.576539   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:40.793156   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:41.076042   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:41.292666   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:41.633216   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:41.793362   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:41.848078   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:42.076443   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:42.293270   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:42.576219   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:42.792945   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:43.076603   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:43.293689   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:43.577361   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:43.793253   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:44.076507   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:44.293042   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:44.346435   15867 pod_ready.go:103] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:44.576440   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:44.792914   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:45.075926   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:45.293845   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:45.577919   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:45.793841   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:45.847318   15867 pod_ready.go:93] pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace has status "Ready":"True"
	I0919 18:42:45.847339   15867 pod_ready.go:82] duration metric: took 1m46.507535249s for pod "metrics-server-84c5f94fbc-9m8bz" in "kube-system" namespace to be "Ready" ...
	I0919 18:42:45.847351   15867 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xgj4t" in "kube-system" namespace to be "Ready" ...
	I0919 18:42:45.852101   15867 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xgj4t" in "kube-system" namespace has status "Ready":"True"
	I0919 18:42:45.852120   15867 pod_ready.go:82] duration metric: took 4.762307ms for pod "nvidia-device-plugin-daemonset-xgj4t" in "kube-system" namespace to be "Ready" ...
	I0919 18:42:45.852147   15867 pod_ready.go:39] duration metric: took 1m47.759960385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:42:45.852171   15867 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:42:45.852229   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:42:45.852292   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:42:45.907563   15867 cri.go:89] found id: "5f2c874561a32b601623b48ce8847eee3829aae3fc336b18f26ec0294a4c7f28"
	I0919 18:42:45.907591   15867 cri.go:89] found id: ""
	I0919 18:42:45.907598   15867 logs.go:276] 1 containers: [5f2c874561a32b601623b48ce8847eee3829aae3fc336b18f26ec0294a4c7f28]
	I0919 18:42:45.907652   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:45.911861   15867 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:42:45.911918   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:42:45.951561   15867 cri.go:89] found id: "0d4a9b398a3b96b7b09fab7e15e3915d39f846ee441b48a61377d94903d2f2b7"
	I0919 18:42:45.951583   15867 cri.go:89] found id: ""
	I0919 18:42:45.951590   15867 logs.go:276] 1 containers: [0d4a9b398a3b96b7b09fab7e15e3915d39f846ee441b48a61377d94903d2f2b7]
	I0919 18:42:45.951642   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:45.955678   15867 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:42:45.955735   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:42:45.999860   15867 cri.go:89] found id: "ce19679764d4fdfdb47f7eaa8fdc7ff5e80aeca3e660f78020ed33ce3e4b9b95"
	I0919 18:42:45.999883   15867 cri.go:89] found id: ""
	I0919 18:42:45.999890   15867 logs.go:276] 1 containers: [ce19679764d4fdfdb47f7eaa8fdc7ff5e80aeca3e660f78020ed33ce3e4b9b95]
	I0919 18:42:45.999932   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:46.004486   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:42:46.004541   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:42:46.046045   15867 cri.go:89] found id: "b3d9b17dcea287caad7018b0722a77115784a252e0236379f77d18583a7c69be"
	I0919 18:42:46.046072   15867 cri.go:89] found id: ""
	I0919 18:42:46.046081   15867 logs.go:276] 1 containers: [b3d9b17dcea287caad7018b0722a77115784a252e0236379f77d18583a7c69be]
	I0919 18:42:46.046126   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:46.050443   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:42:46.050499   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:42:46.076533   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:46.109565   15867 cri.go:89] found id: "79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e"
	I0919 18:42:46.109589   15867 cri.go:89] found id: ""
	I0919 18:42:46.109601   15867 logs.go:276] 1 containers: [79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e]
	I0919 18:42:46.109653   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:46.113845   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:42:46.113915   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:42:46.164004   15867 cri.go:89] found id: "c2f439afae216c80e0454e0ca02cd8b0ae86bbab0d05319e08f3edfaa3afccde"
	I0919 18:42:46.164028   15867 cri.go:89] found id: ""
	I0919 18:42:46.164035   15867 logs.go:276] 1 containers: [c2f439afae216c80e0454e0ca02cd8b0ae86bbab0d05319e08f3edfaa3afccde]
	I0919 18:42:46.164089   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:46.168213   15867 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:42:46.168274   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:42:46.210149   15867 cri.go:89] found id: ""
	I0919 18:42:46.210184   15867 logs.go:276] 0 containers: []
	W0919 18:42:46.210193   15867 logs.go:278] No container was found matching "kindnet"
	I0919 18:42:46.210203   15867 logs.go:123] Gathering logs for kube-controller-manager [c2f439afae216c80e0454e0ca02cd8b0ae86bbab0d05319e08f3edfaa3afccde] ...
	I0919 18:42:46.210215   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2f439afae216c80e0454e0ca02cd8b0ae86bbab0d05319e08f3edfaa3afccde"
	I0919 18:42:46.281709   15867 logs.go:123] Gathering logs for container status ...
	I0919 18:42:46.281747   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:42:46.293716   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:46.334469   15867 logs.go:123] Gathering logs for kubelet ...
	I0919 18:42:46.334495   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:42:46.385773   15867 logs.go:138] Found kubelet problem: Sep 19 18:40:55 addons-140799 kubelet[1214]: W0919 18:40:55.192358    1214 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140799" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140799' and this object
	W0919 18:42:46.385959   15867 logs.go:138] Found kubelet problem: Sep 19 18:40:55 addons-140799 kubelet[1214]: E0919 18:40:55.192437    1214 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-140799\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-140799' and this object" logger="UnhandledError"
	I0919 18:42:46.418314   15867 logs.go:123] Gathering logs for dmesg ...
	I0919 18:42:46.418350   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:42:46.433728   15867 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:42:46.433758   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:42:46.583207   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:46.593707   15867 logs.go:123] Gathering logs for kube-apiserver [5f2c874561a32b601623b48ce8847eee3829aae3fc336b18f26ec0294a4c7f28] ...
	I0919 18:42:46.593741   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2c874561a32b601623b48ce8847eee3829aae3fc336b18f26ec0294a4c7f28"
	I0919 18:42:46.644185   15867 logs.go:123] Gathering logs for coredns [ce19679764d4fdfdb47f7eaa8fdc7ff5e80aeca3e660f78020ed33ce3e4b9b95] ...
	I0919 18:42:46.644218   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce19679764d4fdfdb47f7eaa8fdc7ff5e80aeca3e660f78020ed33ce3e4b9b95"
	I0919 18:42:46.682121   15867 logs.go:123] Gathering logs for kube-proxy [79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e] ...
	I0919 18:42:46.682151   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e"
	I0919 18:42:46.723259   15867 logs.go:123] Gathering logs for etcd [0d4a9b398a3b96b7b09fab7e15e3915d39f846ee441b48a61377d94903d2f2b7] ...
	I0919 18:42:46.723287   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d4a9b398a3b96b7b09fab7e15e3915d39f846ee441b48a61377d94903d2f2b7"
	I0919 18:42:46.777597   15867 logs.go:123] Gathering logs for kube-scheduler [b3d9b17dcea287caad7018b0722a77115784a252e0236379f77d18583a7c69be] ...
	I0919 18:42:46.777631   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3d9b17dcea287caad7018b0722a77115784a252e0236379f77d18583a7c69be"
	I0919 18:42:46.792807   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:46.822729   15867 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:42:46.822761   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:42:47.085048   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:47.293691   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:47.576825   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:47.684173   15867 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:47.684209   15867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:42:47.684288   15867 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0919 18:42:47.684304   15867 out.go:270]   Sep 19 18:40:55 addons-140799 kubelet[1214]: W0919 18:40:55.192358    1214 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140799" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140799' and this object
	  Sep 19 18:40:55 addons-140799 kubelet[1214]: W0919 18:40:55.192358    1214 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140799" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140799' and this object
	W0919 18:42:47.684315   15867 out.go:270]   Sep 19 18:40:55 addons-140799 kubelet[1214]: E0919 18:40:55.192437    1214 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-140799\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-140799' and this object" logger="UnhandledError"
	  Sep 19 18:40:55 addons-140799 kubelet[1214]: E0919 18:40:55.192437    1214 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-140799\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-140799' and this object" logger="UnhandledError"
	I0919 18:42:47.684325   15867 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:47.684336   15867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:42:47.794196   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:48.076355   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:48.293172   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:48.576404   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:48.793505   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:49.076908   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:49.293620   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:49.577176   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:49.793742   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:50.076395   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:50.292844   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:50.577134   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:50.793872   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:51.076866   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:51.293436   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:51.576180   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:51.793893   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:52.076184   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:52.292799   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:52.577742   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:52.793442   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:53.077159   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:53.293814   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:53.577119   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:53.792988   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:54.075639   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:54.297258   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:54.576350   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:54.793187   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:55.077276   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:55.295154   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:55.576221   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:55.794382   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:56.076300   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:56.293835   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:56.576843   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:56.793883   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:57.076030   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:57.292773   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:57.576825   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:57.685375   15867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:42:57.705639   15867 api_server.go:72] duration metric: took 2m8.335578489s to wait for apiserver process to appear ...
	I0919 18:42:57.705674   15867 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:42:57.705707   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:42:57.705768   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:42:57.748236   15867 cri.go:89] found id: "5f2c874561a32b601623b48ce8847eee3829aae3fc336b18f26ec0294a4c7f28"
	I0919 18:42:57.748264   15867 cri.go:89] found id: ""
	I0919 18:42:57.748273   15867 logs.go:276] 1 containers: [5f2c874561a32b601623b48ce8847eee3829aae3fc336b18f26ec0294a4c7f28]
	I0919 18:42:57.748331   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:57.752613   15867 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:42:57.752665   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:42:57.793004   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:57.794342   15867 cri.go:89] found id: "0d4a9b398a3b96b7b09fab7e15e3915d39f846ee441b48a61377d94903d2f2b7"
	I0919 18:42:57.794361   15867 cri.go:89] found id: ""
	I0919 18:42:57.794370   15867 logs.go:276] 1 containers: [0d4a9b398a3b96b7b09fab7e15e3915d39f846ee441b48a61377d94903d2f2b7]
	I0919 18:42:57.794456   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:57.798691   15867 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:42:57.798740   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:42:57.840094   15867 cri.go:89] found id: "ce19679764d4fdfdb47f7eaa8fdc7ff5e80aeca3e660f78020ed33ce3e4b9b95"
	I0919 18:42:57.840123   15867 cri.go:89] found id: ""
	I0919 18:42:57.840133   15867 logs.go:276] 1 containers: [ce19679764d4fdfdb47f7eaa8fdc7ff5e80aeca3e660f78020ed33ce3e4b9b95]
	I0919 18:42:57.840191   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:57.844584   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:42:57.844659   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:42:57.888700   15867 cri.go:89] found id: "b3d9b17dcea287caad7018b0722a77115784a252e0236379f77d18583a7c69be"
	I0919 18:42:57.888728   15867 cri.go:89] found id: ""
	I0919 18:42:57.888737   15867 logs.go:276] 1 containers: [b3d9b17dcea287caad7018b0722a77115784a252e0236379f77d18583a7c69be]
	I0919 18:42:57.888795   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:57.893561   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:42:57.893621   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:42:57.942880   15867 cri.go:89] found id: "79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e"
	I0919 18:42:57.942908   15867 cri.go:89] found id: ""
	I0919 18:42:57.942918   15867 logs.go:276] 1 containers: [79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e]
	I0919 18:42:57.942974   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:57.947883   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:42:57.947948   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:42:57.990613   15867 cri.go:89] found id: "c2f439afae216c80e0454e0ca02cd8b0ae86bbab0d05319e08f3edfaa3afccde"
	I0919 18:42:57.990633   15867 cri.go:89] found id: ""
	I0919 18:42:57.990640   15867 logs.go:276] 1 containers: [c2f439afae216c80e0454e0ca02cd8b0ae86bbab0d05319e08f3edfaa3afccde]
	I0919 18:42:57.990692   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:42:57.994898   15867 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:42:57.994948   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:42:58.040203   15867 cri.go:89] found id: ""
	I0919 18:42:58.040225   15867 logs.go:276] 0 containers: []
	W0919 18:42:58.040234   15867 logs.go:278] No container was found matching "kindnet"
	I0919 18:42:58.040243   15867 logs.go:123] Gathering logs for etcd [0d4a9b398a3b96b7b09fab7e15e3915d39f846ee441b48a61377d94903d2f2b7] ...
	I0919 18:42:58.040258   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d4a9b398a3b96b7b09fab7e15e3915d39f846ee441b48a61377d94903d2f2b7"
	I0919 18:42:58.075507   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:58.106884   15867 logs.go:123] Gathering logs for coredns [ce19679764d4fdfdb47f7eaa8fdc7ff5e80aeca3e660f78020ed33ce3e4b9b95] ...
	I0919 18:42:58.106915   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce19679764d4fdfdb47f7eaa8fdc7ff5e80aeca3e660f78020ed33ce3e4b9b95"
	I0919 18:42:58.142235   15867 logs.go:123] Gathering logs for kube-proxy [79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e] ...
	I0919 18:42:58.142268   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e"
	I0919 18:42:58.191314   15867 logs.go:123] Gathering logs for kube-controller-manager [c2f439afae216c80e0454e0ca02cd8b0ae86bbab0d05319e08f3edfaa3afccde] ...
	I0919 18:42:58.191345   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2f439afae216c80e0454e0ca02cd8b0ae86bbab0d05319e08f3edfaa3afccde"
	I0919 18:42:58.248073   15867 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:42:58.248105   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:42:58.292393   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:58.576727   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:58.793777   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:59.076695   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:59.232358   15867 logs.go:123] Gathering logs for dmesg ...
	I0919 18:42:59.232400   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:42:59.247164   15867 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:42:59.247198   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:42:59.292947   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:59.370839   15867 logs.go:123] Gathering logs for kube-scheduler [b3d9b17dcea287caad7018b0722a77115784a252e0236379f77d18583a7c69be] ...
	I0919 18:42:59.370868   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3d9b17dcea287caad7018b0722a77115784a252e0236379f77d18583a7c69be"
	I0919 18:42:59.415007   15867 logs.go:123] Gathering logs for container status ...
	I0919 18:42:59.415046   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:42:59.466287   15867 logs.go:123] Gathering logs for kubelet ...
	I0919 18:42:59.466323   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:42:59.519571   15867 logs.go:138] Found kubelet problem: Sep 19 18:40:55 addons-140799 kubelet[1214]: W0919 18:40:55.192358    1214 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140799" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140799' and this object
	W0919 18:42:59.519792   15867 logs.go:138] Found kubelet problem: Sep 19 18:40:55 addons-140799 kubelet[1214]: E0919 18:40:55.192437    1214 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-140799\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-140799' and this object" logger="UnhandledError"
	I0919 18:42:59.553794   15867 logs.go:123] Gathering logs for kube-apiserver [5f2c874561a32b601623b48ce8847eee3829aae3fc336b18f26ec0294a4c7f28] ...
	I0919 18:42:59.553827   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f2c874561a32b601623b48ce8847eee3829aae3fc336b18f26ec0294a4c7f28"
	I0919 18:42:59.577120   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:59.614026   15867 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:59.614051   15867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:42:59.614099   15867 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0919 18:42:59.614109   15867 out.go:270]   Sep 19 18:40:55 addons-140799 kubelet[1214]: W0919 18:40:55.192358    1214 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140799" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140799' and this object
	  Sep 19 18:40:55 addons-140799 kubelet[1214]: W0919 18:40:55.192358    1214 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140799" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140799' and this object
	W0919 18:42:59.614120   15867 out.go:270]   Sep 19 18:40:55 addons-140799 kubelet[1214]: E0919 18:40:55.192437    1214 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-140799\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-140799' and this object" logger="UnhandledError"
	  Sep 19 18:40:55 addons-140799 kubelet[1214]: E0919 18:40:55.192437    1214 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-140799\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-140799' and this object" logger="UnhandledError"
	I0919 18:42:59.614129   15867 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:59.614137   15867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:42:59.793920   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:00.076245   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:00.292995   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:00.576516   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:00.793839   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:01.076465   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:01.293224   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:01.576222   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:01.793139   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:02.075943   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:02.292911   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:02.576808   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:02.793760   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:03.076157   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:03.293851   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:03.577250   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:03.793325   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:04.075884   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:04.293786   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:04.576434   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:04.794506   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:05.076349   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:05.293376   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:05.577352   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:05.794547   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:06.076208   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:06.292970   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:06.577344   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:06.793052   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:07.076550   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:07.293932   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:07.577414   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:07.793168   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:08.076271   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:08.293885   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:08.577007   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:08.793142   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:09.077247   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:09.293162   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:09.576573   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:09.614864   15867 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0919 18:43:09.619400   15867 api_server.go:279] https://192.168.39.11:8443/healthz returned 200:
	ok
	I0919 18:43:09.620315   15867 api_server.go:141] control plane version: v1.31.1
	I0919 18:43:09.620343   15867 api_server.go:131] duration metric: took 11.914661976s to wait for apiserver health ...
	I0919 18:43:09.620351   15867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:43:09.620368   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:43:09.620414   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:43:09.663488   15867 cri.go:89] found id: "5f2c874561a32b601623b48ce8847eee3829aae3fc336b18f26ec0294a4c7f28"
	I0919 18:43:09.663514   15867 cri.go:89] found id: ""
	I0919 18:43:09.663525   15867 logs.go:276] 1 containers: [5f2c874561a32b601623b48ce8847eee3829aae3fc336b18f26ec0294a4c7f28]
	I0919 18:43:09.663589   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:43:09.667873   15867 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:43:09.667933   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:43:09.706089   15867 cri.go:89] found id: "0d4a9b398a3b96b7b09fab7e15e3915d39f846ee441b48a61377d94903d2f2b7"
	I0919 18:43:09.706110   15867 cri.go:89] found id: ""
	I0919 18:43:09.706121   15867 logs.go:276] 1 containers: [0d4a9b398a3b96b7b09fab7e15e3915d39f846ee441b48a61377d94903d2f2b7]
	I0919 18:43:09.706220   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:43:09.711092   15867 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:43:09.711164   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:43:09.749325   15867 cri.go:89] found id: "ce19679764d4fdfdb47f7eaa8fdc7ff5e80aeca3e660f78020ed33ce3e4b9b95"
	I0919 18:43:09.749350   15867 cri.go:89] found id: ""
	I0919 18:43:09.749358   15867 logs.go:276] 1 containers: [ce19679764d4fdfdb47f7eaa8fdc7ff5e80aeca3e660f78020ed33ce3e4b9b95]
	I0919 18:43:09.749417   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:43:09.754169   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:43:09.754230   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:43:09.793443   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:09.805330   15867 cri.go:89] found id: "b3d9b17dcea287caad7018b0722a77115784a252e0236379f77d18583a7c69be"
	I0919 18:43:09.805352   15867 cri.go:89] found id: ""
	I0919 18:43:09.805370   15867 logs.go:276] 1 containers: [b3d9b17dcea287caad7018b0722a77115784a252e0236379f77d18583a7c69be]
	I0919 18:43:09.805429   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:43:09.809765   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:43:09.809834   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:43:09.857053   15867 cri.go:89] found id: "79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e"
	I0919 18:43:09.857088   15867 cri.go:89] found id: ""
	I0919 18:43:09.857099   15867 logs.go:276] 1 containers: [79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e]
	I0919 18:43:09.857167   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:43:09.861900   15867 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:43:09.861966   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:43:09.901790   15867 cri.go:89] found id: "c2f439afae216c80e0454e0ca02cd8b0ae86bbab0d05319e08f3edfaa3afccde"
	I0919 18:43:09.901819   15867 cri.go:89] found id: ""
	I0919 18:43:09.901829   15867 logs.go:276] 1 containers: [c2f439afae216c80e0454e0ca02cd8b0ae86bbab0d05319e08f3edfaa3afccde]
	I0919 18:43:09.901883   15867 ssh_runner.go:195] Run: which crictl
	I0919 18:43:09.906675   15867 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:43:09.906756   15867 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:43:09.944219   15867 cri.go:89] found id: ""
	I0919 18:43:09.944250   15867 logs.go:276] 0 containers: []
	W0919 18:43:09.944260   15867 logs.go:278] No container was found matching "kindnet"
	I0919 18:43:09.944272   15867 logs.go:123] Gathering logs for kubelet ...
	I0919 18:43:09.944286   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:43:09.994920   15867 logs.go:138] Found kubelet problem: Sep 19 18:40:55 addons-140799 kubelet[1214]: W0919 18:40:55.192358    1214 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-140799" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-140799' and this object
	W0919 18:43:09.995134   15867 logs.go:138] Found kubelet problem: Sep 19 18:40:55 addons-140799 kubelet[1214]: E0919 18:40:55.192437    1214 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-140799\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-140799' and this object" logger="UnhandledError"
	I0919 18:43:10.030931   15867 logs.go:123] Gathering logs for kube-proxy [79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e] ...
	I0919 18:43:10.030970   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79709f42add00055ecd021ec62e5dcdeb5a7ea8f41964a4f1a6494d911a5656e"
	I0919 18:43:10.073713   15867 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:43:10.073748   15867 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:43:10.076288   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:10.292730   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:10.576982   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:10.793603   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:11.076054   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:11.294643   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:11.576836   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:11.793684   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:12.077226   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:12.297831   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:12.582259   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:12.795258   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:13.088819   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:13.294002   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:13.576894   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:13.793399   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:14.076368   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:14.292887   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:14.576433   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:14.793553   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:15.079332   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:15.293460   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:15.576989   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:15.793786   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:16.079548   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:16.293872   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:16.577821   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:16.793325   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:17.076195   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:17.292338   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:17.576308   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:17.792577   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:18.077697   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:18.293521   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:18.576241   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:18.819656   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:19.076874   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:19.293810   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:19.576719   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:19.793386   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:20.076691   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:20.293732   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:20.578890   15867 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:43:20.794377   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:21.075808   15867 kapi.go:107] duration metric: took 2m23.003779007s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 18:43:21.294982   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:21.793788   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:22.293684   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:22.794181   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:23.293765   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:23.793158   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:24.294225   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:24.794328   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:25.294249   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:25.794704   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:26.295119   15867 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:43:26.795087   15867 kapi.go:107] duration metric: took 2m25.505556066s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:43:26.796568   15867 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-140799 cluster.
	I0919 18:43:26.797718   15867 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:43:26.799042   15867 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0919 18:43:26.800401   15867 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, default-storageclass, nvidia-device-plugin, metrics-server, helm-tiller, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0919 18:43:26.801720   15867 addons.go:510] duration metric: took 2m37.431621239s for enable addons: enabled=[ingress-dns storage-provisioner default-storageclass nvidia-device-plugin metrics-server helm-tiller cloud-spanner inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-140799 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.09s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 node stop m02 -v=7 --alsologtostderr
E0919 19:29:40.311318   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:30:21.273235   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076992 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.466171552s)

                                                
                                                
-- stdout --
	* Stopping node "ha-076992-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:29:24.229466   34018 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:29:24.229604   34018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:29:24.229613   34018 out.go:358] Setting ErrFile to fd 2...
	I0919 19:29:24.229618   34018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:29:24.229797   34018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:29:24.230079   34018 mustload.go:65] Loading cluster: ha-076992
	I0919 19:29:24.230509   34018 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:29:24.230525   34018 stop.go:39] StopHost: ha-076992-m02
	I0919 19:29:24.230908   34018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:29:24.230951   34018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:29:24.246984   34018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43975
	I0919 19:29:24.247426   34018 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:29:24.247942   34018 main.go:141] libmachine: Using API Version  1
	I0919 19:29:24.247966   34018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:29:24.248281   34018 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:29:24.250961   34018 out.go:177] * Stopping node "ha-076992-m02"  ...
	I0919 19:29:24.252482   34018 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0919 19:29:24.252507   34018 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:29:24.252725   34018 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0919 19:29:24.252765   34018 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:29:24.255646   34018 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:29:24.256119   34018 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:29:24.256152   34018 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:29:24.256276   34018 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:29:24.256416   34018 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:29:24.256569   34018 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:29:24.256695   34018 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:29:24.342116   34018 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0919 19:29:24.397619   34018 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0919 19:29:24.454840   34018 main.go:141] libmachine: Stopping "ha-076992-m02"...
	I0919 19:29:24.454870   34018 main.go:141] libmachine: (ha-076992-m02) Calling .GetState
	I0919 19:29:24.456423   34018 main.go:141] libmachine: (ha-076992-m02) Calling .Stop
	I0919 19:29:24.460245   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 0/120
	I0919 19:29:25.461739   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 1/120
	I0919 19:29:26.463603   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 2/120
	I0919 19:29:27.464989   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 3/120
	I0919 19:29:28.466346   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 4/120
	I0919 19:29:29.468235   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 5/120
	I0919 19:29:30.469436   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 6/120
	I0919 19:29:31.471517   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 7/120
	I0919 19:29:32.472736   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 8/120
	I0919 19:29:33.474245   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 9/120
	I0919 19:29:34.476409   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 10/120
	I0919 19:29:35.477903   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 11/120
	I0919 19:29:36.479580   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 12/120
	I0919 19:29:37.481375   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 13/120
	I0919 19:29:38.483549   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 14/120
	I0919 19:29:39.485283   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 15/120
	I0919 19:29:40.486785   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 16/120
	I0919 19:29:41.488101   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 17/120
	I0919 19:29:42.489910   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 18/120
	I0919 19:29:43.491226   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 19/120
	I0919 19:29:44.493384   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 20/120
	I0919 19:29:45.495428   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 21/120
	I0919 19:29:46.496779   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 22/120
	I0919 19:29:47.498181   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 23/120
	I0919 19:29:48.499603   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 24/120
	I0919 19:29:49.501630   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 25/120
	I0919 19:29:50.503547   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 26/120
	I0919 19:29:51.504797   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 27/120
	I0919 19:29:52.506116   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 28/120
	I0919 19:29:53.507421   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 29/120
	I0919 19:29:54.509900   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 30/120
	I0919 19:29:55.511275   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 31/120
	I0919 19:29:56.512864   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 32/120
	I0919 19:29:57.514209   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 33/120
	I0919 19:29:58.515585   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 34/120
	I0919 19:29:59.517443   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 35/120
	I0919 19:30:00.518690   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 36/120
	I0919 19:30:01.520085   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 37/120
	I0919 19:30:02.522053   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 38/120
	I0919 19:30:03.523498   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 39/120
	I0919 19:30:04.525751   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 40/120
	I0919 19:30:05.527077   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 41/120
	I0919 19:30:06.528680   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 42/120
	I0919 19:30:07.530077   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 43/120
	I0919 19:30:08.531415   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 44/120
	I0919 19:30:09.532961   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 45/120
	I0919 19:30:10.534248   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 46/120
	I0919 19:30:11.535666   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 47/120
	I0919 19:30:12.536888   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 48/120
	I0919 19:30:13.538093   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 49/120
	I0919 19:30:14.540470   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 50/120
	I0919 19:30:15.541923   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 51/120
	I0919 19:30:16.543714   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 52/120
	I0919 19:30:17.545043   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 53/120
	I0919 19:30:18.547334   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 54/120
	I0919 19:30:19.549298   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 55/120
	I0919 19:30:20.551261   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 56/120
	I0919 19:30:21.552502   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 57/120
	I0919 19:30:22.553996   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 58/120
	I0919 19:30:23.555334   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 59/120
	I0919 19:30:24.557223   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 60/120
	I0919 19:30:25.559620   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 61/120
	I0919 19:30:26.561131   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 62/120
	I0919 19:30:27.563484   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 63/120
	I0919 19:30:28.564977   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 64/120
	I0919 19:30:29.566306   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 65/120
	I0919 19:30:30.567925   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 66/120
	I0919 19:30:31.569262   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 67/120
	I0919 19:30:32.571646   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 68/120
	I0919 19:30:33.573034   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 69/120
	I0919 19:30:34.575192   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 70/120
	I0919 19:30:35.577008   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 71/120
	I0919 19:30:36.578245   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 72/120
	I0919 19:30:37.579522   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 73/120
	I0919 19:30:38.580777   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 74/120
	I0919 19:30:39.582086   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 75/120
	I0919 19:30:40.583648   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 76/120
	I0919 19:30:41.585038   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 77/120
	I0919 19:30:42.586433   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 78/120
	I0919 19:30:43.587633   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 79/120
	I0919 19:30:44.589823   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 80/120
	I0919 19:30:45.591722   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 81/120
	I0919 19:30:46.593145   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 82/120
	I0919 19:30:47.594466   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 83/120
	I0919 19:30:48.595758   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 84/120
	I0919 19:30:49.597741   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 85/120
	I0919 19:30:50.599234   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 86/120
	I0919 19:30:51.600817   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 87/120
	I0919 19:30:52.602755   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 88/120
	I0919 19:30:53.603888   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 89/120
	I0919 19:30:54.605612   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 90/120
	I0919 19:30:55.607584   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 91/120
	I0919 19:30:56.608999   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 92/120
	I0919 19:30:57.611180   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 93/120
	I0919 19:30:58.612470   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 94/120
	I0919 19:30:59.614583   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 95/120
	I0919 19:31:00.616003   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 96/120
	I0919 19:31:01.617440   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 97/120
	I0919 19:31:02.618857   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 98/120
	I0919 19:31:03.620580   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 99/120
	I0919 19:31:04.622682   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 100/120
	I0919 19:31:05.623988   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 101/120
	I0919 19:31:06.625434   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 102/120
	I0919 19:31:07.626901   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 103/120
	I0919 19:31:08.628196   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 104/120
	I0919 19:31:09.629858   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 105/120
	I0919 19:31:10.631252   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 106/120
	I0919 19:31:11.632547   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 107/120
	I0919 19:31:12.634111   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 108/120
	I0919 19:31:13.635568   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 109/120
	I0919 19:31:14.637634   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 110/120
	I0919 19:31:15.639566   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 111/120
	I0919 19:31:16.641286   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 112/120
	I0919 19:31:17.642737   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 113/120
	I0919 19:31:18.644519   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 114/120
	I0919 19:31:19.646892   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 115/120
	I0919 19:31:20.648059   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 116/120
	I0919 19:31:21.649665   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 117/120
	I0919 19:31:22.651443   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 118/120
	I0919 19:31:23.653038   34018 main.go:141] libmachine: (ha-076992-m02) Waiting for machine to stop 119/120
	I0919 19:31:24.654149   34018 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0919 19:31:24.654295   34018 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-076992 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr
E0919 19:31:43.198289   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr: (18.867334216s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-076992 -n ha-076992
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-076992 logs -n 25: (1.428478404s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992:/home/docker/cp-test_ha-076992-m03_ha-076992.txt                       |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992 sudo cat                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992.txt                                 |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m02:/home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m04 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp testdata/cp-test.txt                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992:/home/docker/cp-test_ha-076992-m04_ha-076992.txt                       |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992 sudo cat                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992.txt                                 |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m02:/home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03:/home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m03 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-076992 node stop m02 -v=7                                                     | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 19:24:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 19:24:50.546945   29946 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:24:50.547063   29946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:50.547072   29946 out.go:358] Setting ErrFile to fd 2...
	I0919 19:24:50.547076   29946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:50.547225   29946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:24:50.547763   29946 out.go:352] Setting JSON to false
	I0919 19:24:50.548588   29946 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4035,"bootTime":1726769856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:24:50.548689   29946 start.go:139] virtualization: kvm guest
	I0919 19:24:50.550911   29946 out.go:177] * [ha-076992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:24:50.552265   29946 notify.go:220] Checking for updates...
	I0919 19:24:50.552285   29946 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:24:50.553819   29946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:24:50.555250   29946 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:24:50.556710   29946 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:50.557978   29946 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:24:50.559199   29946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:24:50.560718   29946 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:24:50.593907   29946 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 19:24:50.595154   29946 start.go:297] selected driver: kvm2
	I0919 19:24:50.595169   29946 start.go:901] validating driver "kvm2" against <nil>
	I0919 19:24:50.595180   29946 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:24:50.595817   29946 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:24:50.595876   29946 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 19:24:50.610266   29946 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 19:24:50.610336   29946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 19:24:50.610614   29946 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:24:50.610657   29946 cni.go:84] Creating CNI manager for ""
	I0919 19:24:50.610702   29946 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 19:24:50.610710   29946 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 19:24:50.610777   29946 start.go:340] cluster config:
	{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:24:50.610877   29946 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:24:50.612616   29946 out.go:177] * Starting "ha-076992" primary control-plane node in "ha-076992" cluster
	I0919 19:24:50.613886   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:24:50.613919   29946 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 19:24:50.613930   29946 cache.go:56] Caching tarball of preloaded images
	I0919 19:24:50.614002   29946 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:24:50.614013   29946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:24:50.614333   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:24:50.614355   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json: {Name:mk8d4afdb9fa7e7321b4f997efa478fa6418ce40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:24:50.614511   29946 start.go:360] acquireMachinesLock for ha-076992: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:24:50.614545   29946 start.go:364] duration metric: took 19.183µs to acquireMachinesLock for "ha-076992"
	I0919 19:24:50.614566   29946 start.go:93] Provisioning new machine with config: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:24:50.614666   29946 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 19:24:50.616202   29946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 19:24:50.616319   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:24:50.616360   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:24:50.630334   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39147
	I0919 19:24:50.630824   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:24:50.631360   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:24:50.631387   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:24:50.631735   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:24:50.631911   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:24:50.632045   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:24:50.632261   29946 start.go:159] libmachine.API.Create for "ha-076992" (driver="kvm2")
	I0919 19:24:50.632292   29946 client.go:168] LocalClient.Create starting
	I0919 19:24:50.632325   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem
	I0919 19:24:50.632369   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:24:50.632396   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:24:50.632469   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem
	I0919 19:24:50.632497   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:24:50.632517   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:24:50.632546   29946 main.go:141] libmachine: Running pre-create checks...
	I0919 19:24:50.632558   29946 main.go:141] libmachine: (ha-076992) Calling .PreCreateCheck
	I0919 19:24:50.632876   29946 main.go:141] libmachine: (ha-076992) Calling .GetConfigRaw
	I0919 19:24:50.633289   29946 main.go:141] libmachine: Creating machine...
	I0919 19:24:50.633304   29946 main.go:141] libmachine: (ha-076992) Calling .Create
	I0919 19:24:50.633442   29946 main.go:141] libmachine: (ha-076992) Creating KVM machine...
	I0919 19:24:50.634573   29946 main.go:141] libmachine: (ha-076992) DBG | found existing default KVM network
	I0919 19:24:50.635280   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:50.635109   29969 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0919 19:24:50.635311   29946 main.go:141] libmachine: (ha-076992) DBG | created network xml: 
	I0919 19:24:50.635327   29946 main.go:141] libmachine: (ha-076992) DBG | <network>
	I0919 19:24:50.635345   29946 main.go:141] libmachine: (ha-076992) DBG |   <name>mk-ha-076992</name>
	I0919 19:24:50.635359   29946 main.go:141] libmachine: (ha-076992) DBG |   <dns enable='no'/>
	I0919 19:24:50.635371   29946 main.go:141] libmachine: (ha-076992) DBG |   
	I0919 19:24:50.635380   29946 main.go:141] libmachine: (ha-076992) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0919 19:24:50.635421   29946 main.go:141] libmachine: (ha-076992) DBG |     <dhcp>
	I0919 19:24:50.635435   29946 main.go:141] libmachine: (ha-076992) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0919 19:24:50.635458   29946 main.go:141] libmachine: (ha-076992) DBG |     </dhcp>
	I0919 19:24:50.635488   29946 main.go:141] libmachine: (ha-076992) DBG |   </ip>
	I0919 19:24:50.635501   29946 main.go:141] libmachine: (ha-076992) DBG |   
	I0919 19:24:50.635515   29946 main.go:141] libmachine: (ha-076992) DBG | </network>
	I0919 19:24:50.635528   29946 main.go:141] libmachine: (ha-076992) DBG | 
	I0919 19:24:50.640246   29946 main.go:141] libmachine: (ha-076992) DBG | trying to create private KVM network mk-ha-076992 192.168.39.0/24...
	I0919 19:24:50.704681   29946 main.go:141] libmachine: (ha-076992) DBG | private KVM network mk-ha-076992 192.168.39.0/24 created
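
The network XML printed above is what the kvm2 driver feeds to libvirt before the "private KVM network ... created" line. A minimal sketch of that define-and-start step, assuming the libvirt.org/go/libvirt bindings and a reachable qemu:///system socket (not the driver's actual code, which lives in docker-machine-driver-kvm2):

```go
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

// networkXML mirrors the definition logged above.
const networkXML = `<network>
  <name>mk-ha-076992</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network object, then bring it up.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("private KVM network mk-ha-076992 created")
}
```
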
	I0919 19:24:50.704725   29946 main.go:141] libmachine: (ha-076992) Setting up store path in /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992 ...
	I0919 19:24:50.704741   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:50.704651   29969 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:50.704763   29946 main.go:141] libmachine: (ha-076992) Building disk image from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 19:24:50.704783   29946 main.go:141] libmachine: (ha-076992) Downloading /home/jenkins/minikube-integration/19664-7917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0919 19:24:50.947095   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:50.946892   29969 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa...
	I0919 19:24:51.013606   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:51.013482   29969 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/ha-076992.rawdisk...
	I0919 19:24:51.013627   29946 main.go:141] libmachine: (ha-076992) DBG | Writing magic tar header
	I0919 19:24:51.013637   29946 main.go:141] libmachine: (ha-076992) DBG | Writing SSH key tar header
	I0919 19:24:51.013650   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:51.013598   29969 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992 ...
	I0919 19:24:51.013757   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992
	I0919 19:24:51.013788   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992 (perms=drwx------)
	I0919 19:24:51.013802   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines
	I0919 19:24:51.013816   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:51.013823   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917
	I0919 19:24:51.013833   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines (perms=drwxr-xr-x)
	I0919 19:24:51.013844   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 19:24:51.013855   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube (perms=drwxr-xr-x)
	I0919 19:24:51.013870   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917 (perms=drwxrwxr-x)
	I0919 19:24:51.013881   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 19:24:51.013890   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 19:24:51.013899   29946 main.go:141] libmachine: (ha-076992) Creating domain...
	I0919 19:24:51.013908   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins
	I0919 19:24:51.013915   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home
	I0919 19:24:51.013924   29946 main.go:141] libmachine: (ha-076992) DBG | Skipping /home - not owner
	I0919 19:24:51.014892   29946 main.go:141] libmachine: (ha-076992) define libvirt domain using xml: 
	I0919 19:24:51.014904   29946 main.go:141] libmachine: (ha-076992) <domain type='kvm'>
	I0919 19:24:51.014910   29946 main.go:141] libmachine: (ha-076992)   <name>ha-076992</name>
	I0919 19:24:51.014944   29946 main.go:141] libmachine: (ha-076992)   <memory unit='MiB'>2200</memory>
	I0919 19:24:51.014958   29946 main.go:141] libmachine: (ha-076992)   <vcpu>2</vcpu>
	I0919 19:24:51.014968   29946 main.go:141] libmachine: (ha-076992)   <features>
	I0919 19:24:51.014975   29946 main.go:141] libmachine: (ha-076992)     <acpi/>
	I0919 19:24:51.014982   29946 main.go:141] libmachine: (ha-076992)     <apic/>
	I0919 19:24:51.015012   29946 main.go:141] libmachine: (ha-076992)     <pae/>
	I0919 19:24:51.015033   29946 main.go:141] libmachine: (ha-076992)     
	I0919 19:24:51.015043   29946 main.go:141] libmachine: (ha-076992)   </features>
	I0919 19:24:51.015052   29946 main.go:141] libmachine: (ha-076992)   <cpu mode='host-passthrough'>
	I0919 19:24:51.015061   29946 main.go:141] libmachine: (ha-076992)   
	I0919 19:24:51.015070   29946 main.go:141] libmachine: (ha-076992)   </cpu>
	I0919 19:24:51.015078   29946 main.go:141] libmachine: (ha-076992)   <os>
	I0919 19:24:51.015088   29946 main.go:141] libmachine: (ha-076992)     <type>hvm</type>
	I0919 19:24:51.015098   29946 main.go:141] libmachine: (ha-076992)     <boot dev='cdrom'/>
	I0919 19:24:51.015117   29946 main.go:141] libmachine: (ha-076992)     <boot dev='hd'/>
	I0919 19:24:51.015130   29946 main.go:141] libmachine: (ha-076992)     <bootmenu enable='no'/>
	I0919 19:24:51.015139   29946 main.go:141] libmachine: (ha-076992)   </os>
	I0919 19:24:51.015171   29946 main.go:141] libmachine: (ha-076992)   <devices>
	I0919 19:24:51.015199   29946 main.go:141] libmachine: (ha-076992)     <disk type='file' device='cdrom'>
	I0919 19:24:51.015212   29946 main.go:141] libmachine: (ha-076992)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/boot2docker.iso'/>
	I0919 19:24:51.015227   29946 main.go:141] libmachine: (ha-076992)       <target dev='hdc' bus='scsi'/>
	I0919 19:24:51.015247   29946 main.go:141] libmachine: (ha-076992)       <readonly/>
	I0919 19:24:51.015259   29946 main.go:141] libmachine: (ha-076992)     </disk>
	I0919 19:24:51.015272   29946 main.go:141] libmachine: (ha-076992)     <disk type='file' device='disk'>
	I0919 19:24:51.015287   29946 main.go:141] libmachine: (ha-076992)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 19:24:51.015303   29946 main.go:141] libmachine: (ha-076992)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/ha-076992.rawdisk'/>
	I0919 19:24:51.015314   29946 main.go:141] libmachine: (ha-076992)       <target dev='hda' bus='virtio'/>
	I0919 19:24:51.015325   29946 main.go:141] libmachine: (ha-076992)     </disk>
	I0919 19:24:51.015334   29946 main.go:141] libmachine: (ha-076992)     <interface type='network'>
	I0919 19:24:51.015347   29946 main.go:141] libmachine: (ha-076992)       <source network='mk-ha-076992'/>
	I0919 19:24:51.015371   29946 main.go:141] libmachine: (ha-076992)       <model type='virtio'/>
	I0919 19:24:51.015382   29946 main.go:141] libmachine: (ha-076992)     </interface>
	I0919 19:24:51.015392   29946 main.go:141] libmachine: (ha-076992)     <interface type='network'>
	I0919 19:24:51.015402   29946 main.go:141] libmachine: (ha-076992)       <source network='default'/>
	I0919 19:24:51.015412   29946 main.go:141] libmachine: (ha-076992)       <model type='virtio'/>
	I0919 19:24:51.015420   29946 main.go:141] libmachine: (ha-076992)     </interface>
	I0919 19:24:51.015432   29946 main.go:141] libmachine: (ha-076992)     <serial type='pty'>
	I0919 19:24:51.015443   29946 main.go:141] libmachine: (ha-076992)       <target port='0'/>
	I0919 19:24:51.015451   29946 main.go:141] libmachine: (ha-076992)     </serial>
	I0919 19:24:51.015462   29946 main.go:141] libmachine: (ha-076992)     <console type='pty'>
	I0919 19:24:51.015471   29946 main.go:141] libmachine: (ha-076992)       <target type='serial' port='0'/>
	I0919 19:24:51.015502   29946 main.go:141] libmachine: (ha-076992)     </console>
	I0919 19:24:51.015516   29946 main.go:141] libmachine: (ha-076992)     <rng model='virtio'>
	I0919 19:24:51.015528   29946 main.go:141] libmachine: (ha-076992)       <backend model='random'>/dev/random</backend>
	I0919 19:24:51.015538   29946 main.go:141] libmachine: (ha-076992)     </rng>
	I0919 19:24:51.015546   29946 main.go:141] libmachine: (ha-076992)     
	I0919 19:24:51.015554   29946 main.go:141] libmachine: (ha-076992)     
	I0919 19:24:51.015563   29946 main.go:141] libmachine: (ha-076992)   </devices>
	I0919 19:24:51.015571   29946 main.go:141] libmachine: (ha-076992) </domain>
	I0919 19:24:51.015594   29946 main.go:141] libmachine: (ha-076992) 
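
The domain XML above goes through the same define-then-start cycle as the network. A hedged fragment, again assuming the libvirt.org/go/libvirt bindings, where `conn` is the connection from the network sketch and `domainXML` stands for the XML logged above:

```go
// Define a persistent domain from the XML above, then boot it
// (the "Creating domain..." step in the log).
dom, err := conn.DomainDefineXML(domainXML)
if err != nil {
	log.Fatal(err)
}
defer dom.Free()
if err := dom.Create(); err != nil {
	log.Fatal(err)
}
```
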
	I0919 19:24:51.019925   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:db:cf:56 in network default
	I0919 19:24:51.020474   29946 main.go:141] libmachine: (ha-076992) Ensuring networks are active...
	I0919 19:24:51.020498   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:51.021112   29946 main.go:141] libmachine: (ha-076992) Ensuring network default is active
	I0919 19:24:51.021403   29946 main.go:141] libmachine: (ha-076992) Ensuring network mk-ha-076992 is active
	I0919 19:24:51.021908   29946 main.go:141] libmachine: (ha-076992) Getting domain xml...
	I0919 19:24:51.022590   29946 main.go:141] libmachine: (ha-076992) Creating domain...
	I0919 19:24:52.199008   29946 main.go:141] libmachine: (ha-076992) Waiting to get IP...
	I0919 19:24:52.199822   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:52.200184   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:52.200222   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:52.200179   29969 retry.go:31] will retry after 305.917546ms: waiting for machine to come up
	I0919 19:24:52.507816   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:52.508347   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:52.508367   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:52.508306   29969 retry.go:31] will retry after 257.743777ms: waiting for machine to come up
	I0919 19:24:52.767675   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:52.768093   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:52.768147   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:52.768045   29969 retry.go:31] will retry after 451.176186ms: waiting for machine to come up
	I0919 19:24:53.220690   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:53.221075   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:53.221127   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:53.221017   29969 retry.go:31] will retry after 532.893204ms: waiting for machine to come up
	I0919 19:24:53.755758   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:53.756124   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:53.756151   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:53.756077   29969 retry.go:31] will retry after 735.36183ms: waiting for machine to come up
	I0919 19:24:54.492954   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:54.493288   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:54.493311   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:54.493234   29969 retry.go:31] will retry after 820.552907ms: waiting for machine to come up
	I0919 19:24:55.315112   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:55.315416   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:55.315452   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:55.315388   29969 retry.go:31] will retry after 1.159630492s: waiting for machine to come up
	I0919 19:24:56.476212   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:56.476585   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:56.476603   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:56.476554   29969 retry.go:31] will retry after 1.27132767s: waiting for machine to come up
	I0919 19:24:57.749988   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:57.750422   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:57.750445   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:57.750374   29969 retry.go:31] will retry after 1.45971409s: waiting for machine to come up
	I0919 19:24:59.211323   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:59.211646   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:59.211667   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:59.211594   29969 retry.go:31] will retry after 1.806599967s: waiting for machine to come up
	I0919 19:25:01.019773   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:01.020204   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:01.020230   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:01.020169   29969 retry.go:31] will retry after 1.98521469s: waiting for machine to come up
	I0919 19:25:03.008256   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:03.008710   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:03.008731   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:03.008667   29969 retry.go:31] will retry after 3.161929877s: waiting for machine to come up
	I0919 19:25:06.172436   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:06.172851   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:06.172870   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:06.172810   29969 retry.go:31] will retry after 3.065142974s: waiting for machine to come up
	I0919 19:25:09.242150   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:09.242595   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:09.242618   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:09.242551   29969 retry.go:31] will retry after 4.628547568s: waiting for machine to come up
	I0919 19:25:13.875203   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.875628   29946 main.go:141] libmachine: (ha-076992) Found IP for machine: 192.168.39.173
	I0919 19:25:13.875655   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has current primary IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
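
The repeated "will retry after ...: waiting for machine to come up" lines are a polling loop with a growing, jittered delay around the DHCP lease lookup. A stdlib-only sketch of that pattern; `lookupIP` is a hypothetical stand-in for however the caller reads the domain's lease:

```go
package vmwait

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with a jittered, growing delay until an address
// appears or maxWait elapses, mirroring the retry lines in the log above.
func waitForIP(lookupIP func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter keeps parallel machine creations from retrying in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		log.Printf("will retry after %s: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", maxWait)
}
```
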
	I0919 19:25:13.875661   29946 main.go:141] libmachine: (ha-076992) Reserving static IP address...
	I0919 19:25:13.876020   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find host DHCP lease matching {name: "ha-076992", mac: "52:54:00:7d:f5:95", ip: "192.168.39.173"} in network mk-ha-076992
	I0919 19:25:13.945252   29946 main.go:141] libmachine: (ha-076992) DBG | Getting to WaitForSSH function...
	I0919 19:25:13.945280   29946 main.go:141] libmachine: (ha-076992) Reserved static IP address: 192.168.39.173
	I0919 19:25:13.945289   29946 main.go:141] libmachine: (ha-076992) Waiting for SSH to be available...
	I0919 19:25:13.947766   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.948158   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:13.948194   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.948312   29946 main.go:141] libmachine: (ha-076992) DBG | Using SSH client type: external
	I0919 19:25:13.948335   29946 main.go:141] libmachine: (ha-076992) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa (-rw-------)
	I0919 19:25:13.948378   29946 main.go:141] libmachine: (ha-076992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 19:25:13.948385   29946 main.go:141] libmachine: (ha-076992) DBG | About to run SSH command:
	I0919 19:25:13.948400   29946 main.go:141] libmachine: (ha-076992) DBG | exit 0
	I0919 19:25:14.069031   29946 main.go:141] libmachine: (ha-076992) DBG | SSH cmd err, output: <nil>: 
	I0919 19:25:14.069310   29946 main.go:141] libmachine: (ha-076992) KVM machine creation complete!
	I0919 19:25:14.069628   29946 main.go:141] libmachine: (ha-076992) Calling .GetConfigRaw
	I0919 19:25:14.070250   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:14.070406   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:14.070540   29946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 19:25:14.070554   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:14.072128   29946 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 19:25:14.072140   29946 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 19:25:14.072145   29946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 19:25:14.072151   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.074112   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.074425   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.074456   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.074626   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.074770   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.074885   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.074971   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.075077   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.075278   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.075290   29946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 19:25:14.176659   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
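
Both WaitForSSH passes above reduce to "run `exit 0` over SSH and treat success as liveness", first with the external ssh binary and then with the native client. A hedged sketch of that probe using golang.org/x/crypto/ssh; host, user, and key path are the values in the log, and InsecureIgnoreHostKey mirrors the StrictHostKeyChecking=no flag in the external variant:

```go
package sshcheck

import (
	"net"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshAvailable returns nil once "exit 0" succeeds over SSH, the same
// liveness probe the log runs against the freshly created VM.
func sshAvailable(ip, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", net.JoinHostPort(ip, "22"), cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}
```

Called with 192.168.39.173, user docker, and the id_rsa path from the log, this check would simply be retried until it returns nil.
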
	I0919 19:25:14.176688   29946 main.go:141] libmachine: Detecting the provisioner...
	I0919 19:25:14.176697   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.179372   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.179694   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.179715   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.179850   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.180053   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.180210   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.180361   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.180525   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.180682   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.180691   29946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 19:25:14.282081   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0919 19:25:14.282192   29946 main.go:141] libmachine: found compatible host: buildroot
	I0919 19:25:14.282206   29946 main.go:141] libmachine: Provisioning with buildroot...
	I0919 19:25:14.282215   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:25:14.282509   29946 buildroot.go:166] provisioning hostname "ha-076992"
	I0919 19:25:14.282531   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:25:14.282795   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.286540   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.286900   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.286924   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.287087   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.287264   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.287404   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.287528   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.287657   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.287847   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.287862   29946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992 && echo "ha-076992" | sudo tee /etc/hostname
	I0919 19:25:14.405366   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:25:14.405398   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.408109   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.408451   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.408503   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.408709   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.408884   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.409027   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.409148   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.409275   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.409515   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.409532   29946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:25:14.518352   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:14.518409   29946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:25:14.518432   29946 buildroot.go:174] setting up certificates
	I0919 19:25:14.518441   29946 provision.go:84] configureAuth start
	I0919 19:25:14.518450   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:25:14.518683   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:14.520859   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.521176   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.521197   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.521352   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.523136   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.523477   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.523502   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.523620   29946 provision.go:143] copyHostCerts
	I0919 19:25:14.523651   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:14.523697   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:25:14.523707   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:14.523782   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:25:14.523897   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:14.523925   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:25:14.523934   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:14.523976   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:25:14.524055   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:14.524076   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:25:14.524085   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:14.524119   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:25:14.524203   29946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992 san=[127.0.0.1 192.168.39.173 ha-076992 localhost minikube]
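
The "generating server cert" line lists the SANs baked into the machine's server certificate. A simplified, self-signed sketch with crypto/x509 showing how those SANs map onto a certificate template (the real provisioner signs with the minikube CA; the 26280h lifetime matches CertExpiration in the cluster config above):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-076992"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line: hostnames plus loopback and the VM's IP.
		DNSNames:    []string{"ha-076992", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.173")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
```
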
	I0919 19:25:14.665666   29946 provision.go:177] copyRemoteCerts
	I0919 19:25:14.665718   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:25:14.665740   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.668329   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.668676   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.668708   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.668855   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.669012   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.669229   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.669429   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:14.751236   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:25:14.751315   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:25:14.776009   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:25:14.776073   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 19:25:14.800333   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:25:14.800401   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:25:14.824393   29946 provision.go:87] duration metric: took 305.938756ms to configureAuth
	I0919 19:25:14.824421   29946 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:25:14.824627   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:14.824707   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.827604   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.827968   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.827993   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.828193   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.828404   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.828556   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.828663   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.828790   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.829402   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.829444   29946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:25:15.045474   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:25:15.045502   29946 main.go:141] libmachine: Checking connection to Docker...
	I0919 19:25:15.045510   29946 main.go:141] libmachine: (ha-076992) Calling .GetURL
	I0919 19:25:15.046752   29946 main.go:141] libmachine: (ha-076992) DBG | Using libvirt version 6000000
	I0919 19:25:15.048660   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.049036   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.049059   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.049264   29946 main.go:141] libmachine: Docker is up and running!
	I0919 19:25:15.049278   29946 main.go:141] libmachine: Reticulating splines...
	I0919 19:25:15.049284   29946 client.go:171] duration metric: took 24.416985175s to LocalClient.Create
	I0919 19:25:15.049305   29946 start.go:167] duration metric: took 24.417044575s to libmachine.API.Create "ha-076992"
	I0919 19:25:15.049317   29946 start.go:293] postStartSetup for "ha-076992" (driver="kvm2")
	I0919 19:25:15.049330   29946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:25:15.049346   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.049548   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:25:15.049567   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.051882   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.052218   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.052245   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.052457   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.052636   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.052818   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.052959   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
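Each "new ssh client" entry above corresponds to opening a key-authenticated SSH session to the VM and running a single command, which is the pattern every subsequent ssh_runner.go line relies on. A self-contained Go sketch of that pattern, assuming the golang.org/x/crypto/ssh package (minikube's internal wiring may differ); the address, user and key path are copied from the log:

	// sshrun.go: sketch of an ssh_runner-style helper: dial the VM with a private
	// key and run one command, returning its combined stdout+stderr.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM host key
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.39.173:22", "docker",
			"/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa",
			"cat /etc/os-release")
		fmt.Println(out, err)
	}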
	I0919 19:25:15.135380   29946 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:25:15.139841   29946 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:25:15.139871   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:25:15.139953   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:25:15.140035   29946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:25:15.140047   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:25:15.140142   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:25:15.149803   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:25:15.173954   29946 start.go:296] duration metric: took 124.6206ms for postStartSetup
	I0919 19:25:15.174015   29946 main.go:141] libmachine: (ha-076992) Calling .GetConfigRaw
	I0919 19:25:15.174578   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:15.176983   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.177379   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.177404   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.177609   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:15.177797   29946 start.go:128] duration metric: took 24.563118372s to createHost
	I0919 19:25:15.177822   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.179973   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.180294   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.180319   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.180465   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.180655   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.180790   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.180976   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.181181   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:15.181358   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:15.181374   29946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:25:15.282086   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726773915.259292374
	
	I0919 19:25:15.282107   29946 fix.go:216] guest clock: 1726773915.259292374
	I0919 19:25:15.282114   29946 fix.go:229] Guest: 2024-09-19 19:25:15.259292374 +0000 UTC Remote: 2024-09-19 19:25:15.177809817 +0000 UTC m=+24.663846475 (delta=81.482557ms)
	I0919 19:25:15.282172   29946 fix.go:200] guest clock delta is within tolerance: 81.482557ms
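The fix.go lines above read the guest clock via `date +%s.%N` and compare it with the host clock, accepting a small delta. A short Go sketch of that comparison; the 1s tolerance is an assumed example value, not necessarily minikube's threshold:

	// clockdelta.go: parse the guest's `date +%s.%N` output and compare it with
	// the host clock, as the guest-clock check above does.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1726773915.259292374")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
	}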
	I0919 19:25:15.282183   29946 start.go:83] releasing machines lock for "ha-076992", held for 24.66762655s
	I0919 19:25:15.282207   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.282416   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:15.285015   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.285310   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.285332   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.285551   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.285982   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.286151   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.286236   29946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:25:15.286279   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.286315   29946 ssh_runner.go:195] Run: cat /version.json
	I0919 19:25:15.286338   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.288664   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.288927   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.288997   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.289024   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.289155   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.289279   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.289305   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.289315   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.289547   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.289548   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.289752   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.289745   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:15.289876   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.289970   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:15.362421   29946 ssh_runner.go:195] Run: systemctl --version
	I0919 19:25:15.387771   29946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:25:15.544684   29946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:25:15.550599   29946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:25:15.550653   29946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:25:15.566463   29946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 19:25:15.566486   29946 start.go:495] detecting cgroup driver to use...
	I0919 19:25:15.566538   29946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:25:15.582773   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:25:15.596900   29946 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:25:15.596957   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:25:15.610508   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:25:15.624376   29946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:25:15.733813   29946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:25:15.878726   29946 docker.go:233] disabling docker service ...
	I0919 19:25:15.878810   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:25:15.892801   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:25:15.905716   29946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:25:16.030572   29946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:25:16.160731   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:25:16.174416   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:25:16.192761   29946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:25:16.192830   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.203609   29946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:25:16.203677   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.214426   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.225032   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.235752   29946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:25:16.247045   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.258205   29946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.275682   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
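The run of sed commands above edits the CRI-O drop-in config in place: pin the pause image, switch the cgroup manager to cgroupfs, and adjust the conmon and sysctl settings. A Go sketch of the first two edits as in-place regex rewrites; the regexes mirror the sed expressions and the file path is taken from the log:

	// crioconf.go: rewrite pause_image and cgroup_manager in the CRI-O drop-in,
	// mirroring the `sudo sed -i ...` commands logged above.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0644); err != nil {
			panic(err)
		}
	}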
	I0919 19:25:16.286480   29946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:25:16.296369   29946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 19:25:16.296429   29946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 19:25:16.310714   29946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:25:16.321030   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:25:16.442591   29946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:25:16.537253   29946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:25:16.537333   29946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:25:16.542338   29946 start.go:563] Will wait 60s for crictl version
	I0919 19:25:16.542399   29946 ssh_runner.go:195] Run: which crictl
	I0919 19:25:16.546294   29946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:25:16.588011   29946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
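The "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a bounded poll for the runtime socket to appear after `systemctl restart crio`. A small Go sketch of such a wait loop; the 500ms poll interval is an assumed example value:

	// waitsock.go: poll for a path to exist, giving up after a timeout, as the
	// "Will wait 60s for socket path" step above does.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is up")
	}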
	I0919 19:25:16.588101   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:25:16.616308   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:25:16.647185   29946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:25:16.648600   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:16.651059   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:16.651358   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:16.651387   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:16.651601   29946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:25:16.655720   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:25:16.669431   29946 kubeadm.go:883] updating cluster {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 19:25:16.669533   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:25:16.669573   29946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:25:16.706546   29946 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0919 19:25:16.706605   29946 ssh_runner.go:195] Run: which lz4
	I0919 19:25:16.710770   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0919 19:25:16.710856   29946 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 19:25:16.715145   29946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 19:25:16.715174   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0919 19:25:18.046106   29946 crio.go:462] duration metric: took 1.335269784s to copy over tarball
	I0919 19:25:18.046183   29946 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 19:25:20.022215   29946 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.975997168s)
	I0919 19:25:20.022248   29946 crio.go:469] duration metric: took 1.976118647s to extract the tarball
	I0919 19:25:20.022255   29946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 19:25:20.059151   29946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:25:20.102732   29946 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:25:20.102759   29946 cache_images.go:84] Images are preloaded, skipping loading
	I0919 19:25:20.102769   29946 kubeadm.go:934] updating node { 192.168.39.173 8443 v1.31.1 crio true true} ...
	I0919 19:25:20.102901   29946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:25:20.102991   29946 ssh_runner.go:195] Run: crio config
	I0919 19:25:20.149091   29946 cni.go:84] Creating CNI manager for ""
	I0919 19:25:20.149117   29946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 19:25:20.149129   29946 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 19:25:20.149151   29946 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076992 NodeName:ha-076992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 19:25:20.149390   29946 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
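The kubeadm.go:181/187 lines above show the option struct being rendered into the kubeadm YAML that follows. A Go sketch of that kind of rendering with text/template; the struct and template fragment here are illustrative, not minikube's actual bootstrapper types or template:

	// kubeadmcfg.go: illustrative rendering of a kubeadm config fragment from a
	// few of the options logged above, using text/template.
	package main

	import (
		"os"
		"text/template"
	)

	type opts struct {
		ClusterName       string
		BindPort          int
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}

	const fragment = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: ClusterConfiguration\n" +
		"clusterName: {{.ClusterName}}\n" +
		"kubernetesVersion: {{.KubernetesVersion}}\n" +
		"controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}\n" +
		"networking:\n" +
		"  podSubnet: \"{{.PodSubnet}}\"\n" +
		"  serviceSubnet: {{.ServiceSubnet}}\n"

	func main() {
		t := template.Must(template.New("kubeadm").Parse(fragment))
		err := t.Execute(os.Stdout, opts{
			ClusterName:       "mk",
			BindPort:          8443,
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.31.1",
		})
		if err != nil {
			panic(err)
		}
	}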
	I0919 19:25:20.149434   29946 kube-vip.go:115] generating kube-vip config ...
	I0919 19:25:20.149487   29946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:25:20.167402   29946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:25:20.167516   29946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0919 19:25:20.167589   29946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:25:20.177872   29946 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 19:25:20.177945   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 19:25:20.187340   29946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0919 19:25:20.203708   29946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:25:20.219797   29946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0919 19:25:20.236038   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0919 19:25:20.251815   29946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:25:20.255527   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:25:20.267874   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:25:20.389268   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:25:20.406525   29946 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.173
	I0919 19:25:20.406544   29946 certs.go:194] generating shared ca certs ...
	I0919 19:25:20.406562   29946 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.406708   29946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:25:20.406775   29946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:25:20.406789   29946 certs.go:256] generating profile certs ...
	I0919 19:25:20.406855   29946 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:25:20.406880   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt with IP's: []
	I0919 19:25:20.508433   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt ...
	I0919 19:25:20.508466   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt: {Name:mkfa51b5957d9c0689bd29c9d7ac67976197d1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.508644   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key ...
	I0919 19:25:20.508659   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key: {Name:mke8583745dcb3fd2e449775522b103cfe463401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.508755   29946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77
	I0919 19:25:20.508774   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.254]
	I0919 19:25:20.790439   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77 ...
	I0919 19:25:20.790476   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77: {Name:mk129f473c8ca2bf9c282104464393dd4c0e2ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.790661   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77 ...
	I0919 19:25:20.790678   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77: {Name:mk3e710a4268d5f56461b3aadb1485c362a2d2c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.790775   29946 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:25:20.790887   29946 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:25:20.790975   29946 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:25:20.790995   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt with IP's: []
	I0919 19:25:20.971771   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt ...
	I0919 19:25:20.971802   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt: {Name:mk0aab9d02f395e9da9c35e7e8f603cb6b5cdfc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.971977   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key ...
	I0919 19:25:20.971992   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key: {Name:mke99ffbb66c5a7dba2706f1581886421c464464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.972083   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:25:20.972116   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:25:20.972133   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:25:20.972152   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:25:20.972170   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:25:20.972189   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:25:20.972210   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:25:20.972227   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:25:20.972297   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:25:20.972349   29946 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:25:20.972361   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:25:20.972459   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:25:20.972537   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:25:20.972573   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:25:20.972635   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:25:20.972677   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:20.972699   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:25:20.972718   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:25:20.973287   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:25:20.998208   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:25:21.020664   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:25:21.043465   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:25:21.065487   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 19:25:21.087887   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 19:25:21.110693   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:25:21.134315   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:25:21.159427   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:25:21.209793   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:25:21.234146   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:25:21.256777   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 19:25:21.273318   29946 ssh_runner.go:195] Run: openssl version
	I0919 19:25:21.279164   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:25:21.290077   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:25:21.294953   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:25:21.295015   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:25:21.301042   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:25:21.311548   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:25:21.322467   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:25:21.326955   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:25:21.327033   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:25:21.332698   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:25:21.343007   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:25:21.353411   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:21.357905   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:21.357956   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:21.363494   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
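The openssl and ln steps above install each trusted certificate under its OpenSSL subject-hash name in /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem). A Go sketch that performs the same hash-and-symlink operation by shelling out to openssl, as the logged commands do; the paths are examples:

	// certlink.go: ask openssl for a certificate's subject hash and link
	// /etc/ssl/certs/<hash>.0 to it, mirroring the `openssl x509 -hash` plus
	// `ln -fs` commands in the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}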
	I0919 19:25:21.373947   29946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:25:21.378011   29946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 19:25:21.378067   29946 kubeadm.go:392] StartCluster: {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:25:21.378145   29946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 19:25:21.378195   29946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 19:25:21.414470   29946 cri.go:89] found id: ""
	I0919 19:25:21.414537   29946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 19:25:21.424173   29946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 19:25:21.433474   29946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 19:25:21.442569   29946 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 19:25:21.442585   29946 kubeadm.go:157] found existing configuration files:
	
	I0919 19:25:21.442641   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 19:25:21.456054   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 19:25:21.456094   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 19:25:21.465434   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 19:25:21.474456   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 19:25:21.474516   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 19:25:21.483588   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 19:25:21.492486   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 19:25:21.492535   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 19:25:21.501852   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 19:25:21.510898   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 19:25:21.510940   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 19:25:21.520189   29946 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 19:25:21.636110   29946 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 19:25:21.636193   29946 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 19:25:21.741569   29946 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 19:25:21.741692   29946 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 19:25:21.741840   29946 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 19:25:21.751361   29946 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 19:25:21.850204   29946 out.go:235]   - Generating certificates and keys ...
	I0919 19:25:21.850323   29946 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 19:25:21.850411   29946 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 19:25:22.052364   29946 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 19:25:22.111035   29946 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 19:25:22.319537   29946 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 19:25:22.387119   29946 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 19:25:22.515422   29946 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 19:25:22.515564   29946 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-076992 localhost] and IPs [192.168.39.173 127.0.0.1 ::1]
	I0919 19:25:22.770343   29946 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 19:25:22.770549   29946 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-076992 localhost] and IPs [192.168.39.173 127.0.0.1 ::1]
	I0919 19:25:22.940962   29946 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 19:25:23.141337   29946 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 19:25:23.227103   29946 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 19:25:23.227182   29946 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 19:25:23.339999   29946 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 19:25:23.488595   29946 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 19:25:23.642974   29946 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 19:25:23.798144   29946 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 19:25:24.008881   29946 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 19:25:24.009486   29946 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 19:25:24.014369   29946 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 19:25:24.145863   29946 out.go:235]   - Booting up control plane ...
	I0919 19:25:24.146000   29946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 19:25:24.146123   29946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 19:25:24.146222   29946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 19:25:24.146351   29946 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 19:25:24.146497   29946 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 19:25:24.146584   29946 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 19:25:24.164755   29946 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 19:25:24.164947   29946 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 19:25:24.666140   29946 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.684085ms
	I0919 19:25:24.666245   29946 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 19:25:30.661904   29946 kubeadm.go:310] [api-check] The API server is healthy after 5.999328933s
	I0919 19:25:30.674821   29946 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 19:25:30.694689   29946 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 19:25:30.728456   29946 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 19:25:30.728705   29946 kubeadm.go:310] [mark-control-plane] Marking the node ha-076992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 19:25:30.742484   29946 kubeadm.go:310] [bootstrap-token] Using token: 9riz07.p2i93yajbhhfpock
	I0919 19:25:30.744002   29946 out.go:235]   - Configuring RBAC rules ...
	I0919 19:25:30.744156   29946 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 19:25:30.749173   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 19:25:30.770991   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 19:25:30.778177   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 19:25:30.786779   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 19:25:30.790121   29946 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 19:25:31.069223   29946 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 19:25:31.498557   29946 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 19:25:32.068354   29946 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 19:25:32.068406   29946 kubeadm.go:310] 
	I0919 19:25:32.068512   29946 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 19:25:32.068526   29946 kubeadm.go:310] 
	I0919 19:25:32.068652   29946 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 19:25:32.068663   29946 kubeadm.go:310] 
	I0919 19:25:32.068714   29946 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 19:25:32.068809   29946 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 19:25:32.068885   29946 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 19:25:32.068895   29946 kubeadm.go:310] 
	I0919 19:25:32.068999   29946 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 19:25:32.069019   29946 kubeadm.go:310] 
	I0919 19:25:32.069122   29946 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 19:25:32.069135   29946 kubeadm.go:310] 
	I0919 19:25:32.069210   29946 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 19:25:32.069312   29946 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 19:25:32.069415   29946 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 19:25:32.069425   29946 kubeadm.go:310] 
	I0919 19:25:32.069540   29946 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 19:25:32.069660   29946 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 19:25:32.069677   29946 kubeadm.go:310] 
	I0919 19:25:32.069794   29946 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9riz07.p2i93yajbhhfpock \
	I0919 19:25:32.069948   29946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 \
	I0919 19:25:32.069992   29946 kubeadm.go:310] 	--control-plane 
	I0919 19:25:32.070002   29946 kubeadm.go:310] 
	I0919 19:25:32.070125   29946 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 19:25:32.070153   29946 kubeadm.go:310] 
	I0919 19:25:32.070277   29946 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9riz07.p2i93yajbhhfpock \
	I0919 19:25:32.070418   29946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 
	I0919 19:25:32.071077   29946 kubeadm.go:310] W0919 19:25:21.617150     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 19:25:32.071492   29946 kubeadm.go:310] W0919 19:25:21.618100     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 19:25:32.071645   29946 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
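	The [kubelet-check] and [api-check] phases above both poll a local healthz endpoint with a 4m0s upper bound. A minimal, self-contained Go sketch of that pattern (not minikube's actual code; the URL and timeout are the ones named in the log, the 500ms poll interval is an assumption):

```go
// Minimal sketch (not minikube's actual code): poll a healthz endpoint until it
// answers 200 OK or a deadline passes, as the kubelet-check/api-check phases describe.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("%s still unhealthy after %s", url, timeout)
}

func main() {
	// URL and the 4m0s upper bound are taken from the log lines above.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}
```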
	I0919 19:25:32.071673   29946 cni.go:84] Creating CNI manager for ""
	I0919 19:25:32.071683   29946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 19:25:32.073578   29946 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 19:25:32.075092   29946 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 19:25:32.080797   29946 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0919 19:25:32.080815   29946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 19:25:32.099353   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
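	For reference, the apply step above amounts to shelling out to the cluster's bundled kubectl against the generated kindnet manifest. A minimal Go sketch of that command (illustrative only, not minikube's ssh_runner; the paths are copied from the log and exist only inside the VM):

```go
// Illustrative only (not minikube's ssh_runner): run the bundled kubectl against the
// generated CNI manifest, mirroring the command in the log line above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	cmd := exec.Command("sudo", kubectl, "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("apply CNI manifest: %v\n%s", err, out)
	}
	log.Printf("CNI manifest applied:\n%s", out)
}
```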
	I0919 19:25:32.484244   29946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 19:25:32.484317   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:32.484356   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076992 minikube.k8s.io/updated_at=2024_09_19T19_25_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=ha-076992 minikube.k8s.io/primary=true
	I0919 19:25:32.699563   29946 ops.go:34] apiserver oom_adj: -16
	I0919 19:25:32.700092   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:33.200174   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:33.700760   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:34.200308   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:34.700609   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:35.200998   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:35.700578   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:35.798072   29946 kubeadm.go:1113] duration metric: took 3.313794341s to wait for elevateKubeSystemPrivileges
	I0919 19:25:35.798118   29946 kubeadm.go:394] duration metric: took 14.420052871s to StartCluster
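	The repeated `kubectl get sa default` runs above are a readiness poll: the command is retried on a roughly 500ms cadence until the default service account exists, at which point the elevate-privileges step is considered done. A minimal sketch of such a poll, using the binary and kubeconfig paths from the log (illustrative, not the real elevateKubeSystemPrivileges code; the 2m timeout is an assumption):

```go
// Illustrative sketch of the readiness poll above: retry `kubectl get sa default`
// on a fixed interval until the default service account exists or time runs out.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account is visible
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing of the log lines
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute) // timeout is an assumption
	fmt.Println("wait result:", err)
}
```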
	I0919 19:25:35.798147   29946 settings.go:142] acquiring lock: {Name:mk58f627f177d13dd5c0d47e681e886cab43cce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:35.798243   29946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:25:35.799184   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/kubeconfig: {Name:mk632e082e805bb0ee3f336087f78588814f24af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:35.799451   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 19:25:35.799465   29946 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:25:35.799491   29946 start.go:241] waiting for startup goroutines ...
	I0919 19:25:35.799511   29946 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 19:25:35.799597   29946 addons.go:69] Setting storage-provisioner=true in profile "ha-076992"
	I0919 19:25:35.799613   29946 addons.go:234] Setting addon storage-provisioner=true in "ha-076992"
	I0919 19:25:35.799618   29946 addons.go:69] Setting default-storageclass=true in profile "ha-076992"
	I0919 19:25:35.799636   29946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-076992"
	I0919 19:25:35.799646   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:25:35.799697   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:35.800027   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.800066   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.800097   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.800144   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.815590   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0919 19:25:35.815605   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I0919 19:25:35.816049   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.816088   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.816567   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.816586   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.816689   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.816710   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.816987   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.817114   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.817220   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:35.817668   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.817714   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.819378   29946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:25:35.819715   29946 kapi.go:59] client config for ha-076992: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 19:25:35.820225   29946 cert_rotation.go:140] Starting client certificate rotation controller
	I0919 19:25:35.820487   29946 addons.go:234] Setting addon default-storageclass=true in "ha-076992"
	I0919 19:25:35.820530   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:25:35.820906   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.820951   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.833309   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0919 19:25:35.833766   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.834301   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.834327   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.834689   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.834900   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:35.835942   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0919 19:25:35.836351   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.836799   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.836819   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.837143   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:35.837207   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.837734   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.837784   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.839005   29946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 19:25:35.840904   29946 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 19:25:35.840925   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 19:25:35.840944   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:35.844561   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.845133   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:35.845270   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.845469   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:35.845677   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:35.845845   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:35.845998   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:35.854128   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0919 19:25:35.854570   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.855071   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.855094   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.855375   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.855571   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:35.857281   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:35.857490   29946 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 19:25:35.857507   29946 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 19:25:35.857525   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:35.860312   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.860745   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:35.860772   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.860889   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:35.861048   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:35.861242   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:35.861376   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:35.927743   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 19:25:36.004938   29946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 19:25:36.013596   29946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 19:25:36.335279   29946 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 19:25:36.504465   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504493   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.504491   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504508   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.504762   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.504781   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.504790   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504802   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.504875   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.504890   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.504900   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.504904   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504916   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.505030   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.505034   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.505041   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.505114   29946 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 19:25:36.505136   29946 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 19:25:36.505210   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.505215   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.505222   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.505242   29946 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0919 19:25:36.505249   29946 round_trippers.go:469] Request Headers:
	I0919 19:25:36.505260   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:25:36.505265   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:25:36.515769   29946 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0919 19:25:36.516537   29946 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0919 19:25:36.516554   29946 round_trippers.go:469] Request Headers:
	I0919 19:25:36.516565   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:25:36.516572   29946 round_trippers.go:473]     Content-Type: application/json
	I0919 19:25:36.516581   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:25:36.519463   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
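	The GET and PUT against /apis/storage.k8s.io/v1/storageclasses above come from a client-go round tripper. A minimal sketch of reading the same resources with a typed clientset, assuming k8s.io/client-go is on the module path and using the kubeconfig path shown earlier in the log:

```go
// Minimal sketch (assumes k8s.io/client-go): build a typed clientset from the
// kubeconfig in the log and list the storage classes the addon step just touched.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19664-7917/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// List storage classes (e.g. the default "standard" class set up by the addon).
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sc := range scs.Items {
		fmt.Println("storage class:", sc.Name)
	}
}
```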
	I0919 19:25:36.519632   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.519650   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.519937   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.519949   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.519960   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.522604   29946 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0919 19:25:36.523991   29946 addons.go:510] duration metric: took 724.482922ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 19:25:36.524039   29946 start.go:246] waiting for cluster config update ...
	I0919 19:25:36.524053   29946 start.go:255] writing updated cluster config ...
	I0919 19:25:36.525729   29946 out.go:201] 
	I0919 19:25:36.527177   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:36.527269   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:36.528940   29946 out.go:177] * Starting "ha-076992-m02" control-plane node in "ha-076992" cluster
	I0919 19:25:36.530205   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:25:36.530230   29946 cache.go:56] Caching tarball of preloaded images
	I0919 19:25:36.530345   29946 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:25:36.530360   29946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:25:36.530451   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:36.530647   29946 start.go:360] acquireMachinesLock for ha-076992-m02: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:25:36.530701   29946 start.go:364] duration metric: took 30.765µs to acquireMachinesLock for "ha-076992-m02"
	I0919 19:25:36.530723   29946 start.go:93] Provisioning new machine with config: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:25:36.530820   29946 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0919 19:25:36.532606   29946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 19:25:36.532678   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:36.532710   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:36.547137   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0919 19:25:36.547545   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:36.547997   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:36.548015   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:36.548367   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:36.548567   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:36.548746   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:36.548944   29946 start.go:159] libmachine.API.Create for "ha-076992" (driver="kvm2")
	I0919 19:25:36.548973   29946 client.go:168] LocalClient.Create starting
	I0919 19:25:36.549008   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem
	I0919 19:25:36.549050   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:25:36.549087   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:25:36.549192   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem
	I0919 19:25:36.549240   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:25:36.549257   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:25:36.549297   29946 main.go:141] libmachine: Running pre-create checks...
	I0919 19:25:36.549316   29946 main.go:141] libmachine: (ha-076992-m02) Calling .PreCreateCheck
	I0919 19:25:36.549515   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetConfigRaw
	I0919 19:25:36.549909   29946 main.go:141] libmachine: Creating machine...
	I0919 19:25:36.549924   29946 main.go:141] libmachine: (ha-076992-m02) Calling .Create
	I0919 19:25:36.550052   29946 main.go:141] libmachine: (ha-076992-m02) Creating KVM machine...
	I0919 19:25:36.551192   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found existing default KVM network
	I0919 19:25:36.551300   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found existing private KVM network mk-ha-076992
	I0919 19:25:36.551429   29946 main.go:141] libmachine: (ha-076992-m02) Setting up store path in /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02 ...
	I0919 19:25:36.551455   29946 main.go:141] libmachine: (ha-076992-m02) Building disk image from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 19:25:36.551523   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.551412   30305 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:25:36.551615   29946 main.go:141] libmachine: (ha-076992-m02) Downloading /home/jenkins/minikube-integration/19664-7917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0919 19:25:36.777277   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.777143   30305 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa...
	I0919 19:25:36.934632   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.934510   30305 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/ha-076992-m02.rawdisk...
	I0919 19:25:36.934655   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Writing magic tar header
	I0919 19:25:36.934666   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Writing SSH key tar header
	I0919 19:25:36.934677   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.934643   30305 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02 ...
	I0919 19:25:36.934732   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02
	I0919 19:25:36.934753   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines
	I0919 19:25:36.934762   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02 (perms=drwx------)
	I0919 19:25:36.934775   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:25:36.934789   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917
	I0919 19:25:36.934801   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 19:25:36.934811   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins
	I0919 19:25:36.934821   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines (perms=drwxr-xr-x)
	I0919 19:25:36.934826   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home
	I0919 19:25:36.934834   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Skipping /home - not owner
	I0919 19:25:36.934842   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube (perms=drwxr-xr-x)
	I0919 19:25:36.934852   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917 (perms=drwxrwxr-x)
	I0919 19:25:36.934866   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 19:25:36.934884   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 19:25:36.934911   29946 main.go:141] libmachine: (ha-076992-m02) Creating domain...
	I0919 19:25:36.935720   29946 main.go:141] libmachine: (ha-076992-m02) define libvirt domain using xml: 
	I0919 19:25:36.935740   29946 main.go:141] libmachine: (ha-076992-m02) <domain type='kvm'>
	I0919 19:25:36.935750   29946 main.go:141] libmachine: (ha-076992-m02)   <name>ha-076992-m02</name>
	I0919 19:25:36.935757   29946 main.go:141] libmachine: (ha-076992-m02)   <memory unit='MiB'>2200</memory>
	I0919 19:25:36.935765   29946 main.go:141] libmachine: (ha-076992-m02)   <vcpu>2</vcpu>
	I0919 19:25:36.935775   29946 main.go:141] libmachine: (ha-076992-m02)   <features>
	I0919 19:25:36.935783   29946 main.go:141] libmachine: (ha-076992-m02)     <acpi/>
	I0919 19:25:36.935792   29946 main.go:141] libmachine: (ha-076992-m02)     <apic/>
	I0919 19:25:36.935799   29946 main.go:141] libmachine: (ha-076992-m02)     <pae/>
	I0919 19:25:36.935808   29946 main.go:141] libmachine: (ha-076992-m02)     
	I0919 19:25:36.935823   29946 main.go:141] libmachine: (ha-076992-m02)   </features>
	I0919 19:25:36.935834   29946 main.go:141] libmachine: (ha-076992-m02)   <cpu mode='host-passthrough'>
	I0919 19:25:36.935839   29946 main.go:141] libmachine: (ha-076992-m02)   
	I0919 19:25:36.935844   29946 main.go:141] libmachine: (ha-076992-m02)   </cpu>
	I0919 19:25:36.935849   29946 main.go:141] libmachine: (ha-076992-m02)   <os>
	I0919 19:25:36.935856   29946 main.go:141] libmachine: (ha-076992-m02)     <type>hvm</type>
	I0919 19:25:36.935861   29946 main.go:141] libmachine: (ha-076992-m02)     <boot dev='cdrom'/>
	I0919 19:25:36.935865   29946 main.go:141] libmachine: (ha-076992-m02)     <boot dev='hd'/>
	I0919 19:25:36.935876   29946 main.go:141] libmachine: (ha-076992-m02)     <bootmenu enable='no'/>
	I0919 19:25:36.935883   29946 main.go:141] libmachine: (ha-076992-m02)   </os>
	I0919 19:25:36.935888   29946 main.go:141] libmachine: (ha-076992-m02)   <devices>
	I0919 19:25:36.935893   29946 main.go:141] libmachine: (ha-076992-m02)     <disk type='file' device='cdrom'>
	I0919 19:25:36.935901   29946 main.go:141] libmachine: (ha-076992-m02)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/boot2docker.iso'/>
	I0919 19:25:36.935911   29946 main.go:141] libmachine: (ha-076992-m02)       <target dev='hdc' bus='scsi'/>
	I0919 19:25:36.935916   29946 main.go:141] libmachine: (ha-076992-m02)       <readonly/>
	I0919 19:25:36.935923   29946 main.go:141] libmachine: (ha-076992-m02)     </disk>
	I0919 19:25:36.935931   29946 main.go:141] libmachine: (ha-076992-m02)     <disk type='file' device='disk'>
	I0919 19:25:36.935939   29946 main.go:141] libmachine: (ha-076992-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 19:25:36.935946   29946 main.go:141] libmachine: (ha-076992-m02)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/ha-076992-m02.rawdisk'/>
	I0919 19:25:36.935951   29946 main.go:141] libmachine: (ha-076992-m02)       <target dev='hda' bus='virtio'/>
	I0919 19:25:36.935958   29946 main.go:141] libmachine: (ha-076992-m02)     </disk>
	I0919 19:25:36.935962   29946 main.go:141] libmachine: (ha-076992-m02)     <interface type='network'>
	I0919 19:25:36.935970   29946 main.go:141] libmachine: (ha-076992-m02)       <source network='mk-ha-076992'/>
	I0919 19:25:36.935974   29946 main.go:141] libmachine: (ha-076992-m02)       <model type='virtio'/>
	I0919 19:25:36.935980   29946 main.go:141] libmachine: (ha-076992-m02)     </interface>
	I0919 19:25:36.935987   29946 main.go:141] libmachine: (ha-076992-m02)     <interface type='network'>
	I0919 19:25:36.935994   29946 main.go:141] libmachine: (ha-076992-m02)       <source network='default'/>
	I0919 19:25:36.935999   29946 main.go:141] libmachine: (ha-076992-m02)       <model type='virtio'/>
	I0919 19:25:36.936006   29946 main.go:141] libmachine: (ha-076992-m02)     </interface>
	I0919 19:25:36.936010   29946 main.go:141] libmachine: (ha-076992-m02)     <serial type='pty'>
	I0919 19:25:36.936015   29946 main.go:141] libmachine: (ha-076992-m02)       <target port='0'/>
	I0919 19:25:36.936021   29946 main.go:141] libmachine: (ha-076992-m02)     </serial>
	I0919 19:25:36.936026   29946 main.go:141] libmachine: (ha-076992-m02)     <console type='pty'>
	I0919 19:25:36.936033   29946 main.go:141] libmachine: (ha-076992-m02)       <target type='serial' port='0'/>
	I0919 19:25:36.936037   29946 main.go:141] libmachine: (ha-076992-m02)     </console>
	I0919 19:25:36.936041   29946 main.go:141] libmachine: (ha-076992-m02)     <rng model='virtio'>
	I0919 19:25:36.936048   29946 main.go:141] libmachine: (ha-076992-m02)       <backend model='random'>/dev/random</backend>
	I0919 19:25:36.936052   29946 main.go:141] libmachine: (ha-076992-m02)     </rng>
	I0919 19:25:36.936057   29946 main.go:141] libmachine: (ha-076992-m02)     
	I0919 19:25:36.936065   29946 main.go:141] libmachine: (ha-076992-m02)     
	I0919 19:25:36.936070   29946 main.go:141] libmachine: (ha-076992-m02)   </devices>
	I0919 19:25:36.936080   29946 main.go:141] libmachine: (ha-076992-m02) </domain>
	I0919 19:25:36.936086   29946 main.go:141] libmachine: (ha-076992-m02) 
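	The XML printed line by line above is a libvirt domain definition for the new node. A minimal sketch of defining and booting a domain from such XML with the libvirt Go bindings, assuming the libvirt.org/go/libvirt package and a local qemu:///system socket; the domain name and the stripped-down device list here are placeholders, not the exact definition above:

```go
// Minimal sketch (assumes libvirt.org/go/libvirt): define a domain from an XML
// description like the one in the log, then boot it.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const domainXML = `<domain type='kvm'>
  <name>example-m02</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
  <devices>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
    <console type='pty'><target type='serial' port='0'/></console>
  </devices>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// Persist the definition, like "define libvirt domain using xml" above.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	// Boot it, like "Creating domain..." above.
	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started")
}
```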
	I0919 19:25:36.942900   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:0e:87:b8 in network default
	I0919 19:25:36.943479   29946 main.go:141] libmachine: (ha-076992-m02) Ensuring networks are active...
	I0919 19:25:36.943509   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:36.944120   29946 main.go:141] libmachine: (ha-076992-m02) Ensuring network default is active
	I0919 19:25:36.944391   29946 main.go:141] libmachine: (ha-076992-m02) Ensuring network mk-ha-076992 is active
	I0919 19:25:36.944707   29946 main.go:141] libmachine: (ha-076992-m02) Getting domain xml...
	I0919 19:25:36.945497   29946 main.go:141] libmachine: (ha-076992-m02) Creating domain...
	I0919 19:25:38.180680   29946 main.go:141] libmachine: (ha-076992-m02) Waiting to get IP...
	I0919 19:25:38.181469   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:38.181903   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:38.181932   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:38.181877   30305 retry.go:31] will retry after 244.203763ms: waiting for machine to come up
	I0919 19:25:38.427374   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:38.427795   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:38.427822   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:38.427757   30305 retry.go:31] will retry after 281.507755ms: waiting for machine to come up
	I0919 19:25:38.711466   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:38.711935   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:38.711962   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:38.711890   30305 retry.go:31] will retry after 465.962788ms: waiting for machine to come up
	I0919 19:25:39.179211   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:39.179652   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:39.179684   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:39.179602   30305 retry.go:31] will retry after 602.174018ms: waiting for machine to come up
	I0919 19:25:39.783380   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:39.783868   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:39.783897   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:39.783820   30305 retry.go:31] will retry after 752.65735ms: waiting for machine to come up
	I0919 19:25:40.537821   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:40.538325   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:40.538351   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:40.538278   30305 retry.go:31] will retry after 659.774912ms: waiting for machine to come up
	I0919 19:25:41.200055   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:41.200443   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:41.200472   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:41.200416   30305 retry.go:31] will retry after 933.838274ms: waiting for machine to come up
	I0919 19:25:42.135781   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:42.136230   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:42.136260   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:42.136180   30305 retry.go:31] will retry after 1.469374699s: waiting for machine to come up
	I0919 19:25:43.606700   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:43.607102   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:43.607128   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:43.607064   30305 retry.go:31] will retry after 1.652950342s: waiting for machine to come up
	I0919 19:25:45.261341   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:45.261788   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:45.261815   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:45.261744   30305 retry.go:31] will retry after 1.905868131s: waiting for machine to come up
	I0919 19:25:47.169717   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:47.170193   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:47.170220   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:47.170129   30305 retry.go:31] will retry after 2.065748875s: waiting for machine to come up
	I0919 19:25:49.238320   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:49.238667   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:49.238694   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:49.238621   30305 retry.go:31] will retry after 2.815922548s: waiting for machine to come up
	I0919 19:25:52.055810   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:52.056201   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:52.056225   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:52.056152   30305 retry.go:31] will retry after 2.765202997s: waiting for machine to come up
	I0919 19:25:54.825094   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:54.825576   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:54.825607   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:54.825532   30305 retry.go:31] will retry after 3.746769052s: waiting for machine to come up
	I0919 19:25:58.574430   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.574995   29946 main.go:141] libmachine: (ha-076992-m02) Found IP for machine: 192.168.39.232
	I0919 19:25:58.575023   29946 main.go:141] libmachine: (ha-076992-m02) Reserving static IP address...
	I0919 19:25:58.575036   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has current primary IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.575526   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find host DHCP lease matching {name: "ha-076992-m02", mac: "52:54:00:5f:39:42", ip: "192.168.39.232"} in network mk-ha-076992
	I0919 19:25:58.646823   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Getting to WaitForSSH function...
	I0919 19:25:58.646849   29946 main.go:141] libmachine: (ha-076992-m02) Reserved static IP address: 192.168.39.232
	I0919 19:25:58.646862   29946 main.go:141] libmachine: (ha-076992-m02) Waiting for SSH to be available...
	I0919 19:25:58.649682   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.650123   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:58.650200   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.650328   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Using SSH client type: external
	I0919 19:25:58.650350   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa (-rw-------)
	I0919 19:25:58.650383   29946 main.go:141] libmachine: (ha-076992-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 19:25:58.650401   29946 main.go:141] libmachine: (ha-076992-m02) DBG | About to run SSH command:
	I0919 19:25:58.650416   29946 main.go:141] libmachine: (ha-076992-m02) DBG | exit 0
	I0919 19:25:58.777771   29946 main.go:141] libmachine: (ha-076992-m02) DBG | SSH cmd err, output: <nil>: 
	I0919 19:25:58.778064   29946 main.go:141] libmachine: (ha-076992-m02) KVM machine creation complete!
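	The "will retry after ..." lines above are a jittered, roughly growing backoff while the new VM picks up a DHCP lease and starts answering SSH (the `exit 0` probe). A minimal sketch of that retry shape (illustrative helper, not minikube's retry package; the probe in main is a stand-in for the real lease/SSH check):

```go
// Illustrative helper (not minikube's retry package): retry an operation with
// jittered, growing delays, as the "will retry after ..." log lines above show.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts are exhausted, sleeping a jittered,
// roughly doubling delay between tries.
func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	start := time.Now()
	err := retry(6, 200*time.Millisecond, func() error {
		if time.Since(start) < time.Second { // pretend the machine needs ~1s to come up
			return fmt.Errorf("machine not up yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```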
	I0919 19:25:58.778379   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetConfigRaw
	I0919 19:25:58.778927   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:58.779131   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:58.779306   29946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 19:25:58.779329   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetState
	I0919 19:25:58.780634   29946 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 19:25:58.780650   29946 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 19:25:58.780657   29946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 19:25:58.780663   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:58.783144   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.783573   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:58.783595   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.783851   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:58.784010   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.784179   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.784350   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:58.784515   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:58.784730   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:58.784742   29946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 19:25:58.888256   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:58.888282   29946 main.go:141] libmachine: Detecting the provisioner...
	I0919 19:25:58.888293   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:58.891062   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.891412   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:58.891443   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.891627   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:58.891808   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.891961   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.892118   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:58.892285   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:58.892465   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:58.892476   29946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 19:25:58.997853   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0919 19:25:58.997904   29946 main.go:141] libmachine: found compatible host: buildroot
	I0919 19:25:58.997917   29946 main.go:141] libmachine: Provisioning with buildroot...
	I0919 19:25:58.997926   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:58.998154   29946 buildroot.go:166] provisioning hostname "ha-076992-m02"
	I0919 19:25:58.998180   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:58.998363   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.001218   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.001600   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.001625   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.001769   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.001924   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.002057   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.002199   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.002363   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.002512   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.002523   29946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992-m02 && echo "ha-076992-m02" | sudo tee /etc/hostname
	I0919 19:25:59.119914   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992-m02
	
	I0919 19:25:59.119943   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.122597   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.122932   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.122959   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.123102   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.123288   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.123386   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.123535   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.123663   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.123816   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.123831   29946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:25:59.234249   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:59.234283   29946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:25:59.234304   29946 buildroot.go:174] setting up certificates
	I0919 19:25:59.234313   29946 provision.go:84] configureAuth start
	I0919 19:25:59.234321   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:59.234593   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:25:59.237517   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.237906   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.237938   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.238086   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.240541   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.240911   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.240937   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.241052   29946 provision.go:143] copyHostCerts
	I0919 19:25:59.241116   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:59.241157   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:25:59.241168   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:59.241245   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:25:59.241332   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:59.241361   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:25:59.241371   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:59.241408   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:25:59.241468   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:59.241492   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:25:59.241501   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:59.241533   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:25:59.241596   29946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992-m02 san=[127.0.0.1 192.168.39.232 ha-076992-m02 localhost minikube]
	I0919 19:25:59.357826   29946 provision.go:177] copyRemoteCerts
	I0919 19:25:59.357894   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:25:59.357924   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.360530   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.360884   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.360911   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.361149   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.361317   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.361482   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.361595   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:25:59.443240   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:25:59.443310   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:25:59.469433   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:25:59.469519   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 19:25:59.495952   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:25:59.496024   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:25:59.522724   29946 provision.go:87] duration metric: took 288.400561ms to configureAuth
	I0919 19:25:59.522748   29946 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:25:59.522917   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:59.522985   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.525520   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.525889   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.525912   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.526077   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.526238   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.526387   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.526517   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.526656   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.526814   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.526826   29946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:25:59.752869   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:25:59.752893   29946 main.go:141] libmachine: Checking connection to Docker...
	I0919 19:25:59.752905   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetURL
	I0919 19:25:59.754292   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Using libvirt version 6000000
	I0919 19:25:59.756429   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.756753   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.756775   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.756952   29946 main.go:141] libmachine: Docker is up and running!
	I0919 19:25:59.756967   29946 main.go:141] libmachine: Reticulating splines...
	I0919 19:25:59.756974   29946 client.go:171] duration metric: took 23.20799249s to LocalClient.Create
	I0919 19:25:59.756996   29946 start.go:167] duration metric: took 23.208049551s to libmachine.API.Create "ha-076992"
	I0919 19:25:59.757009   29946 start.go:293] postStartSetup for "ha-076992-m02" (driver="kvm2")
	I0919 19:25:59.757026   29946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:25:59.757049   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:59.757304   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:25:59.757329   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.759641   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.760058   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.760084   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.760219   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.760398   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.760511   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.760656   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:25:59.843621   29946 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:25:59.848206   29946 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:25:59.848232   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:25:59.848296   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:25:59.848392   29946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:25:59.848404   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:25:59.848515   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:25:59.858316   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:25:59.885251   29946 start.go:296] duration metric: took 128.22453ms for postStartSetup
	I0919 19:25:59.885295   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetConfigRaw
	I0919 19:25:59.885821   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:25:59.888318   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.888680   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.888708   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.888945   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:59.889154   29946 start.go:128] duration metric: took 23.358320855s to createHost
	I0919 19:25:59.889176   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.891311   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.891643   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.891660   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.891792   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.891944   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.892068   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.892176   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.892294   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.892443   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.892452   29946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:26:00.002053   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726773959.961389731
	
	I0919 19:26:00.002074   29946 fix.go:216] guest clock: 1726773959.961389731
	I0919 19:26:00.002082   29946 fix.go:229] Guest: 2024-09-19 19:25:59.961389731 +0000 UTC Remote: 2024-09-19 19:25:59.889165721 +0000 UTC m=+69.375202371 (delta=72.22401ms)
	I0919 19:26:00.002098   29946 fix.go:200] guest clock delta is within tolerance: 72.22401ms
	I0919 19:26:00.002103   29946 start.go:83] releasing machines lock for "ha-076992-m02", held for 23.47139118s
	I0919 19:26:00.002120   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.002405   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:26:00.005381   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.005748   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:00.005768   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.008103   29946 out.go:177] * Found network options:
	I0919 19:26:00.009556   29946 out.go:177]   - NO_PROXY=192.168.39.173
	W0919 19:26:00.010768   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:26:00.010799   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.011365   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.011545   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.011641   29946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:26:00.011680   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	W0919 19:26:00.011835   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:26:00.011913   29946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:26:00.011935   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:26:00.014635   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.014741   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.015053   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:00.015078   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.015105   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:00.015122   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.015192   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:26:00.015389   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:26:00.015425   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:26:00.015551   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:26:00.015586   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:26:00.015680   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:26:00.015686   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:26:00.015847   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:26:00.243733   29946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:26:00.250260   29946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:26:00.250318   29946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:26:00.266157   29946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 19:26:00.266187   29946 start.go:495] detecting cgroup driver to use...
	I0919 19:26:00.266257   29946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:26:00.284373   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:26:00.299098   29946 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:26:00.299161   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:26:00.313776   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:26:00.328144   29946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:26:00.450118   29946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:26:00.592879   29946 docker.go:233] disabling docker service ...
	I0919 19:26:00.592942   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:26:00.607656   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:26:00.620367   29946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:26:00.756551   29946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:26:00.888081   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:26:00.901911   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:26:00.920807   29946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:26:00.920876   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.931652   29946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:26:00.931715   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.944741   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.955512   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.966422   29946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:26:00.977466   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.988029   29946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:01.011140   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:01.022261   29946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:26:01.031891   29946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 19:26:01.031944   29946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 19:26:01.044785   29946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:26:01.054444   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:26:01.182828   29946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:26:01.272829   29946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:26:01.272907   29946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:26:01.277937   29946 start.go:563] Will wait 60s for crictl version
	I0919 19:26:01.277997   29946 ssh_runner.go:195] Run: which crictl
	I0919 19:26:01.282022   29946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:26:01.321749   29946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:26:01.321825   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:26:01.350681   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:26:01.380754   29946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:26:01.382497   29946 out.go:177]   - env NO_PROXY=192.168.39.173
	I0919 19:26:01.383753   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:26:01.386332   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:01.386661   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:01.386690   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:01.386880   29946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:26:01.391190   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:26:01.403767   29946 mustload.go:65] Loading cluster: ha-076992
	I0919 19:26:01.403960   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:26:01.404199   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:01.404248   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:01.418919   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0919 19:26:01.419393   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:01.419861   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:01.419882   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:01.420168   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:01.420331   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:26:01.421875   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:26:01.422160   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:01.422195   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:01.437017   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0919 19:26:01.437468   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:01.437893   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:01.437915   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:01.438300   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:01.438497   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:26:01.438639   29946 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.232
	I0919 19:26:01.438648   29946 certs.go:194] generating shared ca certs ...
	I0919 19:26:01.438661   29946 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:26:01.438777   29946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:26:01.438815   29946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:26:01.438824   29946 certs.go:256] generating profile certs ...
	I0919 19:26:01.438904   29946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:26:01.438934   29946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548
	I0919 19:26:01.438954   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.232 192.168.39.254]
	I0919 19:26:01.570629   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548 ...
	I0919 19:26:01.570661   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548: {Name:mk20c396761e9ccfefb28b7b4e5db83bbd0de404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:26:01.570827   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548 ...
	I0919 19:26:01.570840   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548: {Name:mkbba11c725a3524e5cbb6109330222760dc216a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:26:01.570911   29946 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:26:01.571040   29946 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:26:01.571164   29946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:26:01.571178   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:26:01.571191   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:26:01.571239   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:26:01.571263   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:26:01.571276   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:26:01.571286   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:26:01.571298   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:26:01.571308   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:26:01.571356   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:26:01.571390   29946 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:26:01.571399   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:26:01.571419   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:26:01.571441   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:26:01.571462   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:26:01.571500   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:26:01.571524   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:26:01.571538   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:01.571552   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:26:01.571582   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:26:01.574554   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:01.574961   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:26:01.574989   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:01.575190   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:26:01.575379   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:26:01.575503   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:26:01.575643   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:26:01.649555   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 19:26:01.654610   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 19:26:01.666818   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 19:26:01.670813   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 19:26:01.681979   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 19:26:01.686362   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 19:26:01.696685   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 19:26:01.700738   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 19:26:01.711578   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 19:26:01.715684   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 19:26:01.727402   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 19:26:01.731821   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 19:26:01.743441   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:26:01.772076   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:26:01.796535   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:26:01.821191   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:26:01.847148   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 19:26:01.871474   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 19:26:01.894939   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:26:01.918215   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:26:01.943385   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:26:01.968566   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:26:01.992928   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:26:02.017141   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 19:26:02.033989   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 19:26:02.051070   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 19:26:02.067651   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 19:26:02.084618   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 19:26:02.100924   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 19:26:02.117332   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 19:26:02.133574   29946 ssh_runner.go:195] Run: openssl version
	I0919 19:26:02.139079   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:26:02.149396   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:26:02.153709   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:26:02.153753   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:26:02.159372   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:26:02.169469   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:26:02.179773   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:26:02.184096   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:26:02.184140   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:26:02.189599   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:26:02.199935   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:26:02.210371   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:02.214711   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:02.214755   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:02.220241   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 19:26:02.230545   29946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:26:02.234717   29946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 19:26:02.234762   29946 kubeadm.go:934] updating node {m02 192.168.39.232 8443 v1.31.1 crio true true} ...
	I0919 19:26:02.234833   29946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:26:02.234855   29946 kube-vip.go:115] generating kube-vip config ...
	I0919 19:26:02.234882   29946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:26:02.250138   29946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:26:02.250208   29946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 19:26:02.250263   29946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:26:02.260294   29946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0919 19:26:02.260356   29946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0919 19:26:02.271123   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0919 19:26:02.271155   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:26:02.271170   29946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0919 19:26:02.271131   29946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0919 19:26:02.271252   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:26:02.275907   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0919 19:26:02.275932   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0919 19:26:04.726131   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:26:04.741861   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:26:04.741942   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:26:04.747080   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0919 19:26:04.747110   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0919 19:26:05.138782   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:26:05.138864   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:26:05.143906   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0919 19:26:05.143942   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0919 19:26:05.391094   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 19:26:05.402470   29946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 19:26:05.419083   29946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:26:05.435530   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:26:05.452330   29946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:26:05.456142   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:26:05.468600   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:26:05.590348   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:26:05.607783   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:26:05.608143   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:05.608190   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:05.622922   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I0919 19:26:05.623374   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:05.623806   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:05.623826   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:05.624115   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:05.624311   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:26:05.624422   29946 start.go:317] joinCluster: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:26:05.624512   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 19:26:05.624535   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:26:05.627671   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:05.628201   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:26:05.628231   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:05.628426   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:26:05.628584   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:26:05.628775   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:26:05.628963   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:26:05.783004   29946 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:26:05.783062   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k2rxz4.c60ygnjp1ja274y0 --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m02 --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443"
	I0919 19:26:26.852036   29946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k2rxz4.c60ygnjp1ja274y0 --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m02 --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443": (21.068945229s)
	I0919 19:26:26.852075   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 19:26:27.433951   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076992-m02 minikube.k8s.io/updated_at=2024_09_19T19_26_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=ha-076992 minikube.k8s.io/primary=false
	I0919 19:26:27.570431   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076992-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 19:26:27.685911   29946 start.go:319] duration metric: took 22.061483301s to joinCluster
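
The join sequence above compresses into a few commands; the sketch below restates them for readability. Addresses, the CRI socket, node name and flags are copied from the log; the token and CA hash are placeholders for whatever `kubeadm token create --print-join-command` returns on the first control plane, and `kubectl` stands in for the pinned /var/lib/minikube/binaries/v1.31.1/kubectl that the log invokes.

    # On the existing control plane: mint a join token (minikube runs this with --ttl=0, i.e. non-expiring).
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm token create --print-join-command --ttl=0

    # On the new node (ha-076992-m02): join as an additional control-plane member via the shared endpoint.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443 \
        --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m02 \
        --ignore-preflight-errors=all
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet

    # Post-join bookkeeping from the log: label the node for minikube and drop the control-plane
    # NoSchedule taint so the new member can also run workloads (timestamp/version labels omitted here).
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076992-m02 \
      minikube.k8s.io/name=ha-076992 minikube.k8s.io/primary=false
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076992-m02 \
      node-role.kubernetes.io/control-plane:NoSchedule-
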
	I0919 19:26:27.685989   29946 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:26:27.686288   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:26:27.687539   29946 out.go:177] * Verifying Kubernetes components...
	I0919 19:26:27.689112   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:26:27.988894   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:26:28.006672   29946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:26:28.006924   29946 kapi.go:59] client config for ha-076992: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 19:26:28.006987   29946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.173:8443
	I0919 19:26:28.007186   29946 node_ready.go:35] waiting up to 6m0s for node "ha-076992-m02" to be "Ready" ...
	I0919 19:26:28.007293   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:28.007303   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:28.007314   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:28.007319   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:28.016756   29946 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0919 19:26:28.508333   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:28.508360   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:28.508372   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:28.508378   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:28.516049   29946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0919 19:26:29.007871   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:29.007898   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:29.007909   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:29.007913   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:29.011642   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:29.507413   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:29.507439   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:29.507447   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:29.507452   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:29.511660   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:30.007557   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:30.007578   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:30.007586   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:30.007591   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:30.011038   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:30.011598   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:30.508074   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:30.508099   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:30.508109   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:30.508112   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:30.511669   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:31.007638   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:31.007657   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:31.007665   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:31.007669   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:31.011418   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:31.507577   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:31.507605   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:31.507615   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:31.507626   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:31.511375   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:32.007718   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:32.007740   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:32.007749   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:32.007756   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:32.011650   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:32.012415   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:32.507637   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:32.507664   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:32.507676   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:32.507683   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:32.511755   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:33.008213   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:33.008234   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:33.008242   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:33.008246   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:33.011792   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:33.507684   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:33.507712   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:33.507720   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:33.507725   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:33.511853   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:34.007466   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:34.007488   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:34.007496   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:34.007500   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:34.012044   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:34.013001   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:34.508399   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:34.508419   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:34.508429   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:34.508434   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:34.512448   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:35.007796   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:35.007816   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:35.007824   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:35.007827   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:35.011062   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:35.508040   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:35.508073   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:35.508085   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:35.508091   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:35.511620   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:36.008049   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:36.008071   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:36.008079   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:36.008083   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:36.011403   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:36.508302   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:36.508324   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:36.508332   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:36.508337   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:36.511571   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:36.512300   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:37.007542   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:37.007564   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:37.007575   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:37.007582   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:37.011805   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:37.508050   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:37.508072   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:37.508080   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:37.508085   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:37.511538   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:38.007485   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:38.007511   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:38.007521   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:38.007533   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:38.011022   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:38.508063   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:38.508084   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:38.508092   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:38.508096   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:38.511492   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:39.008426   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:39.008451   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:39.008461   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:39.008467   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:39.012681   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:39.013788   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:39.508128   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:39.508151   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:39.508160   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:39.508165   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:39.512449   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:40.008306   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:40.008329   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:40.008337   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:40.008340   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:40.011906   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:40.508039   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:40.508061   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:40.508069   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:40.508074   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:40.511457   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:41.007677   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:41.007700   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:41.007709   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:41.007714   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:41.011506   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:41.507543   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:41.507564   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:41.507572   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:41.507578   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:41.510792   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:41.511569   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:42.008395   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:42.008418   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:42.008426   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:42.008430   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:42.011477   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:42.507458   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:42.507479   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:42.507487   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:42.507490   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:42.510874   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:43.008232   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:43.008255   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:43.008263   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:43.008266   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:43.011709   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:43.507746   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:43.507769   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:43.507778   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:43.507783   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:43.511265   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:43.511790   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:44.008252   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:44.008274   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:44.008284   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:44.008290   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:44.011544   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:44.507848   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:44.507875   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:44.507888   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:44.507894   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:44.510925   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:45.007953   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:45.007975   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:45.007983   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:45.007987   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:45.012020   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:45.508267   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:45.508293   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:45.508302   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:45.508309   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:45.512037   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:45.512623   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:46.008137   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.008158   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.008165   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.008169   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.012104   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.012731   29946 node_ready.go:49] node "ha-076992-m02" has status "Ready":"True"
	I0919 19:26:46.012750   29946 node_ready.go:38] duration metric: took 18.005542928s for node "ha-076992-m02" to be "Ready" ...
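
The ~18 s of repeated GETs above are minikube's readiness poll against the first control plane's API endpoint (it switches from the stale VIP to https://192.168.39.173:8443 at 19:26:28). An equivalent one-liner, assuming kubectl and the profile kubeconfig shown earlier in the log, would be:

    # Block until the freshly joined node reports the Ready condition, with the same 6m ceiling as the log.
    kubectl --kubeconfig=/home/jenkins/minikube-integration/19664-7917/kubeconfig \
      wait node/ha-076992-m02 --for=condition=Ready --timeout=6m
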
	I0919 19:26:46.012759   29946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 19:26:46.012828   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:46.012838   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.012845   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.012851   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.017898   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:26:46.023994   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.024066   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bst8x
	I0919 19:26:46.024075   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.024083   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.024087   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.027015   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.027716   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.027731   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.027738   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.027742   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.030392   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.030831   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.030846   29946 pod_ready.go:82] duration metric: took 6.831386ms for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.030853   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.030893   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nbds4
	I0919 19:26:46.030900   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.030907   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.030911   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.033599   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.034104   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.034116   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.034122   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.034125   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.036185   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.036561   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.036576   29946 pod_ready.go:82] duration metric: took 5.717406ms for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.036584   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.036632   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992
	I0919 19:26:46.036642   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.036649   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.036654   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.038980   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.039515   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.039526   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.039532   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.039535   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.041804   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.042161   29946 pod_ready.go:93] pod "etcd-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.042174   29946 pod_ready.go:82] duration metric: took 5.5845ms for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.042181   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.042226   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m02
	I0919 19:26:46.042236   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.042242   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.042247   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.044464   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.045049   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.045081   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.045091   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.045095   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.047141   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.047566   29946 pod_ready.go:93] pod "etcd-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.047579   29946 pod_ready.go:82] duration metric: took 5.393087ms for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.047590   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.208948   29946 request.go:632] Waited for 161.306549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:26:46.209021   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:26:46.209027   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.209035   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.209041   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.212646   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.408764   29946 request.go:632] Waited for 195.355169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.408850   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.408861   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.408869   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.408878   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.412302   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.412793   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.412809   29946 pod_ready.go:82] duration metric: took 365.213979ms for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.412818   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.609130   29946 request.go:632] Waited for 196.247315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:26:46.609190   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:26:46.609195   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.609203   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.609205   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.612762   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.808777   29946 request.go:632] Waited for 195.389035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.808839   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.808844   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.808851   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.808854   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.812076   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.812671   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.812690   29946 pod_ready.go:82] duration metric: took 399.865629ms for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.812701   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.008865   29946 request.go:632] Waited for 196.089609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:26:47.008926   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:26:47.008931   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.008940   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.008944   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.012069   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:47.208226   29946 request.go:632] Waited for 195.285225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:47.208310   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:47.208321   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.208333   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.208340   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.211658   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:47.212273   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:47.212334   29946 pod_ready.go:82] duration metric: took 399.616733ms for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.212376   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.408402   29946 request.go:632] Waited for 195.932577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:26:47.408471   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:26:47.408476   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.408483   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.408488   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.412589   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:47.608602   29946 request.go:632] Waited for 195.361457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:47.608664   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:47.608670   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.608677   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.608683   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.611901   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:47.612434   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:47.612461   29946 pod_ready.go:82] duration metric: took 400.073222ms for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.612471   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.808579   29946 request.go:632] Waited for 196.032947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:26:47.808639   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:26:47.808647   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.808656   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.808663   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.811981   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.009006   29946 request.go:632] Waited for 196.338909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.009055   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.009072   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.009080   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.009088   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.012721   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.013205   29946 pod_ready.go:93] pod "kube-proxy-4d8dc" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:48.013223   29946 pod_ready.go:82] duration metric: took 400.743363ms for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.013233   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.208239   29946 request.go:632] Waited for 194.931072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:26:48.208327   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:26:48.208336   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.208357   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.208367   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.211846   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.408960   29946 request.go:632] Waited for 196.372524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:48.409013   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:48.409018   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.409025   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.409030   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.412044   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:48.412602   29946 pod_ready.go:93] pod "kube-proxy-tjtfj" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:48.412619   29946 pod_ready.go:82] duration metric: took 399.379304ms for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.412628   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.608768   29946 request.go:632] Waited for 196.067805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:26:48.608847   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:26:48.608853   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.608860   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.608867   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.612031   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.809050   29946 request.go:632] Waited for 196.389681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.809131   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.809137   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.809146   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.809149   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.812475   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.813104   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:48.813123   29946 pod_ready.go:82] duration metric: took 400.488766ms for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.813133   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:49.009203   29946 request.go:632] Waited for 196.009229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:26:49.009276   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:26:49.009288   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.009300   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.009312   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.013885   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:49.208739   29946 request.go:632] Waited for 194.357315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:49.208808   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:49.208813   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.208822   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.208826   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.212311   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:49.212795   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:49.212813   29946 pod_ready.go:82] duration metric: took 399.67345ms for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:49.212826   29946 pod_ready.go:39] duration metric: took 3.200055081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
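
The burst of throttled requests above is the per-component Ready check for the system-critical pods. A rough kubectl equivalent, reusing the same label selectors listed in the log:

    # Wait for each system-critical component's pods to become Ready in kube-system.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait pod -l "$sel" --for=condition=Ready --timeout=6m
    done
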
	I0919 19:26:49.212844   29946 api_server.go:52] waiting for apiserver process to appear ...
	I0919 19:26:49.212896   29946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:26:49.228541   29946 api_server.go:72] duration metric: took 21.542513425s to wait for apiserver process to appear ...
	I0919 19:26:49.228570   29946 api_server.go:88] waiting for apiserver healthz status ...
	I0919 19:26:49.228591   29946 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0919 19:26:49.232969   29946 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0919 19:26:49.233025   29946 round_trippers.go:463] GET https://192.168.39.173:8443/version
	I0919 19:26:49.233033   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.233040   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.233048   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.234012   29946 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0919 19:26:49.234106   29946 api_server.go:141] control plane version: v1.31.1
	I0919 19:26:49.234128   29946 api_server.go:131] duration metric: took 5.550093ms to wait for apiserver health ...
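
The healthz and version probes above can be reproduced by hand. The sketch below assumes curl is available and reuses the profile's CA and client-certificate paths shown in the client config earlier in the log:

    # Probe the apiserver the same way the log does (expects "ok" from /healthz and v1.31.1 from /version).
    PKI=/home/jenkins/minikube-integration/19664-7917/.minikube
    AUTH="--cacert $PKI/ca.crt --cert $PKI/profiles/ha-076992/client.crt --key $PKI/profiles/ha-076992/client.key"
    curl -s $AUTH https://192.168.39.173:8443/healthz
    curl -s $AUTH https://192.168.39.173:8443/version
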
	I0919 19:26:49.234140   29946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 19:26:49.408598   29946 request.go:632] Waited for 174.396795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.408664   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.408670   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.408680   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.408697   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.414220   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:26:49.419326   29946 system_pods.go:59] 17 kube-system pods found
	I0919 19:26:49.419355   29946 system_pods.go:61] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:26:49.419366   29946 system_pods.go:61] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:26:49.419370   29946 system_pods.go:61] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:26:49.419374   29946 system_pods.go:61] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:26:49.419377   29946 system_pods.go:61] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:26:49.419380   29946 system_pods.go:61] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:26:49.419384   29946 system_pods.go:61] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:26:49.419389   29946 system_pods.go:61] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:26:49.419392   29946 system_pods.go:61] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:26:49.419395   29946 system_pods.go:61] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:26:49.419398   29946 system_pods.go:61] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:26:49.419402   29946 system_pods.go:61] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:26:49.419408   29946 system_pods.go:61] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:26:49.419411   29946 system_pods.go:61] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:26:49.419415   29946 system_pods.go:61] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:26:49.419421   29946 system_pods.go:61] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:26:49.419423   29946 system_pods.go:61] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:26:49.419429   29946 system_pods.go:74] duration metric: took 185.281302ms to wait for pod list to return data ...
	I0919 19:26:49.419438   29946 default_sa.go:34] waiting for default service account to be created ...
	I0919 19:26:49.608712   29946 request.go:632] Waited for 189.201717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:26:49.608795   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:26:49.608802   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.608809   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.608814   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.612612   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:49.612816   29946 default_sa.go:45] found service account: "default"
	I0919 19:26:49.612834   29946 default_sa.go:55] duration metric: took 193.38871ms for default service account to be created ...
	I0919 19:26:49.612845   29946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 19:26:49.808242   29946 request.go:632] Waited for 195.299973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.808306   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.808313   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.808327   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.808332   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.812812   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:49.816942   29946 system_pods.go:86] 17 kube-system pods found
	I0919 19:26:49.816968   29946 system_pods.go:89] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:26:49.816974   29946 system_pods.go:89] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:26:49.816978   29946 system_pods.go:89] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:26:49.816982   29946 system_pods.go:89] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:26:49.816987   29946 system_pods.go:89] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:26:49.816990   29946 system_pods.go:89] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:26:49.816994   29946 system_pods.go:89] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:26:49.816997   29946 system_pods.go:89] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:26:49.817001   29946 system_pods.go:89] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:26:49.817006   29946 system_pods.go:89] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:26:49.817009   29946 system_pods.go:89] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:26:49.817012   29946 system_pods.go:89] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:26:49.817015   29946 system_pods.go:89] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:26:49.817018   29946 system_pods.go:89] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:26:49.817022   29946 system_pods.go:89] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:26:49.817025   29946 system_pods.go:89] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:26:49.817027   29946 system_pods.go:89] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:26:49.817033   29946 system_pods.go:126] duration metric: took 204.182134ms to wait for k8s-apps to be running ...
	I0919 19:26:49.817042   29946 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 19:26:49.817110   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:26:49.832907   29946 system_svc.go:56] duration metric: took 15.854427ms WaitForService to wait for kubelet
	I0919 19:26:49.832937   29946 kubeadm.go:582] duration metric: took 22.146916375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:26:49.832959   29946 node_conditions.go:102] verifying NodePressure condition ...
	I0919 19:26:50.008290   29946 request.go:632] Waited for 175.255303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes
	I0919 19:26:50.008370   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes
	I0919 19:26:50.008377   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:50.008395   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:50.008412   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:50.012639   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:50.013536   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:26:50.013563   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:26:50.013575   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:26:50.013578   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:26:50.013583   29946 node_conditions.go:105] duration metric: took 180.618254ms to run NodePressure ...
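(Editorial note: the node_conditions lines above correspond to a GET on /api/v1/nodes followed by reading each node's cpu and ephemeral-storage capacity. A minimal sketch of that check using client-go, for illustration only; the kubeconfig path is a placeholder, not taken from this run.)

```go
// Illustrative sketch of the node-capacity check, assuming client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The same fields the log reports: cpu capacity and ephemeral-storage capacity.
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
	}
}
```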
	I0919 19:26:50.013609   29946 start.go:241] waiting for startup goroutines ...
	I0919 19:26:50.013645   29946 start.go:255] writing updated cluster config ...
	I0919 19:26:50.016260   29946 out.go:201] 
	I0919 19:26:50.017506   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:26:50.017610   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:26:50.019348   29946 out.go:177] * Starting "ha-076992-m03" control-plane node in "ha-076992" cluster
	I0919 19:26:50.020726   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:26:50.020750   29946 cache.go:56] Caching tarball of preloaded images
	I0919 19:26:50.020859   29946 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:26:50.020870   29946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:26:50.020951   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:26:50.021276   29946 start.go:360] acquireMachinesLock for ha-076992-m03: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:26:50.021320   29946 start.go:364] duration metric: took 25.515µs to acquireMachinesLock for "ha-076992-m03"
	I0919 19:26:50.021340   29946 start.go:93] Provisioning new machine with config: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:26:50.021447   29946 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0919 19:26:50.023219   29946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 19:26:50.023316   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:50.023350   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:50.038933   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0919 19:26:50.039419   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:50.039936   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:50.039958   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:50.040292   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:50.040458   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:26:50.040592   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:26:50.040729   29946 start.go:159] libmachine.API.Create for "ha-076992" (driver="kvm2")
	I0919 19:26:50.040757   29946 client.go:168] LocalClient.Create starting
	I0919 19:26:50.040790   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem
	I0919 19:26:50.040824   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:26:50.040838   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:26:50.040886   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem
	I0919 19:26:50.040904   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:26:50.040914   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:26:50.040933   29946 main.go:141] libmachine: Running pre-create checks...
	I0919 19:26:50.040941   29946 main.go:141] libmachine: (ha-076992-m03) Calling .PreCreateCheck
	I0919 19:26:50.041191   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetConfigRaw
	I0919 19:26:50.041557   29946 main.go:141] libmachine: Creating machine...
	I0919 19:26:50.041570   29946 main.go:141] libmachine: (ha-076992-m03) Calling .Create
	I0919 19:26:50.041718   29946 main.go:141] libmachine: (ha-076992-m03) Creating KVM machine...
	I0919 19:26:50.042959   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found existing default KVM network
	I0919 19:26:50.043089   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found existing private KVM network mk-ha-076992
	I0919 19:26:50.043212   29946 main.go:141] libmachine: (ha-076992-m03) Setting up store path in /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03 ...
	I0919 19:26:50.043237   29946 main.go:141] libmachine: (ha-076992-m03) Building disk image from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 19:26:50.043301   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.043202   30696 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:26:50.043388   29946 main.go:141] libmachine: (ha-076992-m03) Downloading /home/jenkins/minikube-integration/19664-7917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0919 19:26:50.272805   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.272669   30696 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa...
	I0919 19:26:50.366932   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.366796   30696 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/ha-076992-m03.rawdisk...
	I0919 19:26:50.366967   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Writing magic tar header
	I0919 19:26:50.366980   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Writing SSH key tar header
	I0919 19:26:50.366998   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.366905   30696 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03 ...
	I0919 19:26:50.367013   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03
	I0919 19:26:50.367090   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03 (perms=drwx------)
	I0919 19:26:50.367125   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines (perms=drwxr-xr-x)
	I0919 19:26:50.367136   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines
	I0919 19:26:50.367162   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube (perms=drwxr-xr-x)
	I0919 19:26:50.367182   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:26:50.367196   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917 (perms=drwxrwxr-x)
	I0919 19:26:50.367208   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917
	I0919 19:26:50.367220   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 19:26:50.367228   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins
	I0919 19:26:50.367240   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home
	I0919 19:26:50.367249   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Skipping /home - not owner
	I0919 19:26:50.367259   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 19:26:50.367272   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 19:26:50.367282   29946 main.go:141] libmachine: (ha-076992-m03) Creating domain...
	I0919 19:26:50.368245   29946 main.go:141] libmachine: (ha-076992-m03) define libvirt domain using xml: 
	I0919 19:26:50.368263   29946 main.go:141] libmachine: (ha-076992-m03) <domain type='kvm'>
	I0919 19:26:50.368270   29946 main.go:141] libmachine: (ha-076992-m03)   <name>ha-076992-m03</name>
	I0919 19:26:50.368275   29946 main.go:141] libmachine: (ha-076992-m03)   <memory unit='MiB'>2200</memory>
	I0919 19:26:50.368280   29946 main.go:141] libmachine: (ha-076992-m03)   <vcpu>2</vcpu>
	I0919 19:26:50.368287   29946 main.go:141] libmachine: (ha-076992-m03)   <features>
	I0919 19:26:50.368314   29946 main.go:141] libmachine: (ha-076992-m03)     <acpi/>
	I0919 19:26:50.368335   29946 main.go:141] libmachine: (ha-076992-m03)     <apic/>
	I0919 19:26:50.368360   29946 main.go:141] libmachine: (ha-076992-m03)     <pae/>
	I0919 19:26:50.368384   29946 main.go:141] libmachine: (ha-076992-m03)     
	I0919 19:26:50.368405   29946 main.go:141] libmachine: (ha-076992-m03)   </features>
	I0919 19:26:50.368416   29946 main.go:141] libmachine: (ha-076992-m03)   <cpu mode='host-passthrough'>
	I0919 19:26:50.368427   29946 main.go:141] libmachine: (ha-076992-m03)   
	I0919 19:26:50.368434   29946 main.go:141] libmachine: (ha-076992-m03)   </cpu>
	I0919 19:26:50.368446   29946 main.go:141] libmachine: (ha-076992-m03)   <os>
	I0919 19:26:50.368453   29946 main.go:141] libmachine: (ha-076992-m03)     <type>hvm</type>
	I0919 19:26:50.368468   29946 main.go:141] libmachine: (ha-076992-m03)     <boot dev='cdrom'/>
	I0919 19:26:50.368486   29946 main.go:141] libmachine: (ha-076992-m03)     <boot dev='hd'/>
	I0919 19:26:50.368498   29946 main.go:141] libmachine: (ha-076992-m03)     <bootmenu enable='no'/>
	I0919 19:26:50.368507   29946 main.go:141] libmachine: (ha-076992-m03)   </os>
	I0919 19:26:50.368515   29946 main.go:141] libmachine: (ha-076992-m03)   <devices>
	I0919 19:26:50.368519   29946 main.go:141] libmachine: (ha-076992-m03)     <disk type='file' device='cdrom'>
	I0919 19:26:50.368529   29946 main.go:141] libmachine: (ha-076992-m03)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/boot2docker.iso'/>
	I0919 19:26:50.368538   29946 main.go:141] libmachine: (ha-076992-m03)       <target dev='hdc' bus='scsi'/>
	I0919 19:26:50.368548   29946 main.go:141] libmachine: (ha-076992-m03)       <readonly/>
	I0919 19:26:50.368562   29946 main.go:141] libmachine: (ha-076992-m03)     </disk>
	I0919 19:26:50.368574   29946 main.go:141] libmachine: (ha-076992-m03)     <disk type='file' device='disk'>
	I0919 19:26:50.368585   29946 main.go:141] libmachine: (ha-076992-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 19:26:50.368595   29946 main.go:141] libmachine: (ha-076992-m03)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/ha-076992-m03.rawdisk'/>
	I0919 19:26:50.368602   29946 main.go:141] libmachine: (ha-076992-m03)       <target dev='hda' bus='virtio'/>
	I0919 19:26:50.368606   29946 main.go:141] libmachine: (ha-076992-m03)     </disk>
	I0919 19:26:50.368613   29946 main.go:141] libmachine: (ha-076992-m03)     <interface type='network'>
	I0919 19:26:50.368618   29946 main.go:141] libmachine: (ha-076992-m03)       <source network='mk-ha-076992'/>
	I0919 19:26:50.368625   29946 main.go:141] libmachine: (ha-076992-m03)       <model type='virtio'/>
	I0919 19:26:50.368637   29946 main.go:141] libmachine: (ha-076992-m03)     </interface>
	I0919 19:26:50.368648   29946 main.go:141] libmachine: (ha-076992-m03)     <interface type='network'>
	I0919 19:26:50.368657   29946 main.go:141] libmachine: (ha-076992-m03)       <source network='default'/>
	I0919 19:26:50.368666   29946 main.go:141] libmachine: (ha-076992-m03)       <model type='virtio'/>
	I0919 19:26:50.368678   29946 main.go:141] libmachine: (ha-076992-m03)     </interface>
	I0919 19:26:50.368688   29946 main.go:141] libmachine: (ha-076992-m03)     <serial type='pty'>
	I0919 19:26:50.368694   29946 main.go:141] libmachine: (ha-076992-m03)       <target port='0'/>
	I0919 19:26:50.368700   29946 main.go:141] libmachine: (ha-076992-m03)     </serial>
	I0919 19:26:50.368705   29946 main.go:141] libmachine: (ha-076992-m03)     <console type='pty'>
	I0919 19:26:50.368713   29946 main.go:141] libmachine: (ha-076992-m03)       <target type='serial' port='0'/>
	I0919 19:26:50.368722   29946 main.go:141] libmachine: (ha-076992-m03)     </console>
	I0919 19:26:50.368736   29946 main.go:141] libmachine: (ha-076992-m03)     <rng model='virtio'>
	I0919 19:26:50.368755   29946 main.go:141] libmachine: (ha-076992-m03)       <backend model='random'>/dev/random</backend>
	I0919 19:26:50.368772   29946 main.go:141] libmachine: (ha-076992-m03)     </rng>
	I0919 19:26:50.368781   29946 main.go:141] libmachine: (ha-076992-m03)     
	I0919 19:26:50.368790   29946 main.go:141] libmachine: (ha-076992-m03)     
	I0919 19:26:50.368799   29946 main.go:141] libmachine: (ha-076992-m03)   </devices>
	I0919 19:26:50.368809   29946 main.go:141] libmachine: (ha-076992-m03) </domain>
	I0919 19:26:50.368819   29946 main.go:141] libmachine: (ha-076992-m03) 
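(Editorial note: the block above is the libvirt domain XML the kvm2 driver generates for the new node. As an illustration only, and assuming the libvirt-go bindings rather than the driver's actual code, defining and booting a domain from such XML looks roughly like the sketch below.)

```go
// Minimal sketch, assuming libvirt.org/go/libvirt; not the kvm2 driver's implementation.
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func defineAndStart(xmlPath string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // URI matches KVMQemuURI in the config above
	if err != nil {
		return err
	}
	defer conn.Close()

	xml, err := os.ReadFile(xmlPath)
	if err != nil {
		return err
	}

	// "define libvirt domain using xml" followed by "Creating domain..." in the log.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create()
}

func main() {
	if err := defineAndStart("ha-076992-m03.xml"); err != nil { // path is illustrative
		log.Fatal(err)
	}
}
```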
	I0919 19:26:50.375827   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:e1:f4:70 in network default
	I0919 19:26:50.376416   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:50.376447   29946 main.go:141] libmachine: (ha-076992-m03) Ensuring networks are active...
	I0919 19:26:50.377119   29946 main.go:141] libmachine: (ha-076992-m03) Ensuring network default is active
	I0919 19:26:50.377451   29946 main.go:141] libmachine: (ha-076992-m03) Ensuring network mk-ha-076992 is active
	I0919 19:26:50.377904   29946 main.go:141] libmachine: (ha-076992-m03) Getting domain xml...
	I0919 19:26:50.378666   29946 main.go:141] libmachine: (ha-076992-m03) Creating domain...
	I0919 19:26:51.611728   29946 main.go:141] libmachine: (ha-076992-m03) Waiting to get IP...
	I0919 19:26:51.612561   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:51.612946   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:51.612965   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:51.612926   30696 retry.go:31] will retry after 229.04121ms: waiting for machine to come up
	I0919 19:26:51.843282   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:51.843786   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:51.843820   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:51.843734   30696 retry.go:31] will retry after 364.805682ms: waiting for machine to come up
	I0919 19:26:52.210136   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:52.210584   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:52.210610   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:52.210546   30696 retry.go:31] will retry after 345.198613ms: waiting for machine to come up
	I0919 19:26:52.556935   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:52.557405   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:52.557428   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:52.557338   30696 retry.go:31] will retry after 457.195059ms: waiting for machine to come up
	I0919 19:26:53.015946   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:53.016403   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:53.016423   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:53.016360   30696 retry.go:31] will retry after 743.82706ms: waiting for machine to come up
	I0919 19:26:53.762468   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:53.762847   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:53.762870   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:53.762817   30696 retry.go:31] will retry after 795.902123ms: waiting for machine to come up
	I0919 19:26:54.560380   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:54.560862   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:54.560884   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:54.560818   30696 retry.go:31] will retry after 723.847816ms: waiting for machine to come up
	I0919 19:26:55.285997   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:55.286544   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:55.286569   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:55.286475   30696 retry.go:31] will retry after 1.372100892s: waiting for machine to come up
	I0919 19:26:56.660980   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:56.661391   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:56.661417   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:56.661373   30696 retry.go:31] will retry after 1.303463786s: waiting for machine to come up
	I0919 19:26:57.966063   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:57.966500   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:57.966528   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:57.966449   30696 retry.go:31] will retry after 1.418881121s: waiting for machine to come up
	I0919 19:26:59.387181   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:59.387696   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:59.387727   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:59.387636   30696 retry.go:31] will retry after 2.01324992s: waiting for machine to come up
	I0919 19:27:01.402316   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:01.402776   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:27:01.402804   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:27:01.402729   30696 retry.go:31] will retry after 3.126162565s: waiting for machine to come up
	I0919 19:27:04.533132   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:04.533523   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:27:04.533546   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:27:04.533483   30696 retry.go:31] will retry after 3.645979241s: waiting for machine to come up
	I0919 19:27:08.184963   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:08.185442   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:27:08.185465   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:27:08.185392   30696 retry.go:31] will retry after 4.695577454s: waiting for machine to come up
	I0919 19:27:12.882164   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.882571   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has current primary IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.882589   29946 main.go:141] libmachine: (ha-076992-m03) Found IP for machine: 192.168.39.66
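(Editorial note: the retry.go lines above poll for the domain's DHCP lease with a growing delay between attempts until an IP appears. A small sketch of that pattern; lookupIP is a hypothetical stand-in for querying libvirt's leases by MAC address.)

```go
// Illustrative retry-with-growing-delay loop; not minikube's actual retry helper.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a hypothetical stand-in for querying DHCP leases for the given MAC.
func lookupIP(mac string) (string, error) { return "", errNoLease }

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the delay between attempts, as in the log above
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP on MAC %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:6a:be:a6", 2*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```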
	I0919 19:27:12.882601   29946 main.go:141] libmachine: (ha-076992-m03) Reserving static IP address...
	I0919 19:27:12.882993   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find host DHCP lease matching {name: "ha-076992-m03", mac: "52:54:00:6a:be:a6", ip: "192.168.39.66"} in network mk-ha-076992
	I0919 19:27:12.954002   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Getting to WaitForSSH function...
	I0919 19:27:12.954035   29946 main.go:141] libmachine: (ha-076992-m03) Reserved static IP address: 192.168.39.66
	I0919 19:27:12.954075   29946 main.go:141] libmachine: (ha-076992-m03) Waiting for SSH to be available...
	I0919 19:27:12.956412   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.956840   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:12.956865   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.957025   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Using SSH client type: external
	I0919 19:27:12.957056   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa (-rw-------)
	I0919 19:27:12.957197   29946 main.go:141] libmachine: (ha-076992-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 19:27:12.957216   29946 main.go:141] libmachine: (ha-076992-m03) DBG | About to run SSH command:
	I0919 19:27:12.957228   29946 main.go:141] libmachine: (ha-076992-m03) DBG | exit 0
	I0919 19:27:13.081333   29946 main.go:141] libmachine: (ha-076992-m03) DBG | SSH cmd err, output: <nil>: 
	I0919 19:27:13.081616   29946 main.go:141] libmachine: (ha-076992-m03) KVM machine creation complete!
	I0919 19:27:13.081958   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetConfigRaw
	I0919 19:27:13.082498   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:13.082685   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:13.082851   29946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 19:27:13.082866   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetState
	I0919 19:27:13.084230   29946 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 19:27:13.084246   29946 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 19:27:13.084253   29946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 19:27:13.084261   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.086332   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.086661   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.086683   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.086775   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.086955   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.087082   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.087204   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.087369   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.087586   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.087601   29946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 19:27:13.188711   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
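(Editorial note: the "About to run SSH command: exit 0" exchange above is the driver's SSH reachability probe. A hedged sketch of the same probe using golang.org/x/crypto/ssh, not minikube's implementation; host, user, and key path are the ones shown in the log.)

```go
// Minimal sketch of an `exit 0` SSH probe, assuming golang.org/x/crypto/ssh.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.66:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// A zero exit status from `exit 0` means the SSH service is up.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatalf("SSH not ready: %v", err)
	}
	log.Println("SSH is available")
}
```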
	I0919 19:27:13.188735   29946 main.go:141] libmachine: Detecting the provisioner...
	I0919 19:27:13.188748   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.191413   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.191717   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.191744   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.191916   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.192073   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.192197   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.192317   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.192502   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.192705   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.192716   29946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 19:27:13.293829   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0919 19:27:13.293892   29946 main.go:141] libmachine: found compatible host: buildroot
	I0919 19:27:13.293901   29946 main.go:141] libmachine: Provisioning with buildroot...
	I0919 19:27:13.293911   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:27:13.294179   29946 buildroot.go:166] provisioning hostname "ha-076992-m03"
	I0919 19:27:13.294206   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:27:13.294379   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.297332   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.297705   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.297731   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.297878   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.298121   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.298268   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.298407   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.298593   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.298797   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.298812   29946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992-m03 && echo "ha-076992-m03" | sudo tee /etc/hostname
	I0919 19:27:13.417925   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992-m03
	
	I0919 19:27:13.417953   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.421043   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.421515   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.421544   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.421759   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.421977   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.422158   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.422267   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.422417   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.422625   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.422650   29946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:27:13.534273   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:27:13.534305   29946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:27:13.534319   29946 buildroot.go:174] setting up certificates
	I0919 19:27:13.534328   29946 provision.go:84] configureAuth start
	I0919 19:27:13.534336   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:27:13.534593   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:13.536896   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.537258   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.537285   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.537378   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.539354   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.539732   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.539755   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.539949   29946 provision.go:143] copyHostCerts
	I0919 19:27:13.539973   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:27:13.540002   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:27:13.540010   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:27:13.540074   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:27:13.540169   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:27:13.540188   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:27:13.540192   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:27:13.540218   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:27:13.540272   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:27:13.540289   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:27:13.540295   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:27:13.540317   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:27:13.540366   29946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992-m03 san=[127.0.0.1 192.168.39.66 ha-076992-m03 localhost minikube]
	I0919 19:27:13.664258   29946 provision.go:177] copyRemoteCerts
	I0919 19:27:13.664317   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:27:13.664340   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.666694   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.666972   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.667004   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.667138   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.667349   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.667524   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.667655   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:13.747501   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:27:13.747575   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:27:13.775047   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:27:13.775117   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 19:27:13.799961   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:27:13.800042   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:27:13.824466   29946 provision.go:87] duration metric: took 290.126442ms to configureAuth
	I0919 19:27:13.824491   29946 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:27:13.824710   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:27:13.824790   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.827490   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.827892   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.827922   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.828063   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.828244   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.828410   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.828560   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.828704   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.828855   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.828868   29946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:27:14.055519   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:27:14.055549   29946 main.go:141] libmachine: Checking connection to Docker...
	I0919 19:27:14.055560   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetURL
	I0919 19:27:14.056949   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Using libvirt version 6000000
	I0919 19:27:14.059445   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.059710   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.059746   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.059910   29946 main.go:141] libmachine: Docker is up and running!
	I0919 19:27:14.059934   29946 main.go:141] libmachine: Reticulating splines...
	I0919 19:27:14.059941   29946 client.go:171] duration metric: took 24.019173404s to LocalClient.Create
	I0919 19:27:14.059965   29946 start.go:167] duration metric: took 24.019236466s to libmachine.API.Create "ha-076992"
	I0919 19:27:14.059975   29946 start.go:293] postStartSetup for "ha-076992-m03" (driver="kvm2")
	I0919 19:27:14.059989   29946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:27:14.060019   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.060324   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:27:14.060351   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:14.062476   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.062770   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.062797   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.062880   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.063087   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.063268   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.063425   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:14.148901   29946 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:27:14.153351   29946 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:27:14.153376   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:27:14.153447   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:27:14.153516   29946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:27:14.153525   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:27:14.153603   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:27:14.163847   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:27:14.190891   29946 start.go:296] duration metric: took 130.895498ms for postStartSetup
	I0919 19:27:14.190969   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetConfigRaw
	I0919 19:27:14.191591   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:14.194303   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.194676   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.194706   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.195041   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:27:14.195249   29946 start.go:128] duration metric: took 24.173788829s to createHost
	I0919 19:27:14.195296   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:14.197299   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.197596   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.197621   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.197722   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.197880   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.197999   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.198111   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.198242   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:14.198397   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:14.198407   29946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:27:14.302149   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726774034.280175121
	
	I0919 19:27:14.302173   29946 fix.go:216] guest clock: 1726774034.280175121
	I0919 19:27:14.302181   29946 fix.go:229] Guest: 2024-09-19 19:27:14.280175121 +0000 UTC Remote: 2024-09-19 19:27:14.195262057 +0000 UTC m=+143.681298720 (delta=84.913064ms)
	I0919 19:27:14.302206   29946 fix.go:200] guest clock delta is within tolerance: 84.913064ms
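(Editorial note: the fix.go lines above read the guest's `date +%s.%N`, compare it with the host clock, and accept the machine when the skew is within a tolerance. A sketch of that comparison; runSSH is a hypothetical helper and the tolerance value is illustrative.)

```go
// Illustrative guest/host clock-skew check; not minikube's fix.go.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// runSSH is a hypothetical stand-in for running `date +%s.%N` on the guest over SSH.
func runSSH(cmd string) (string, error) { return "1726774034.280175121", nil }

func guestClockDelta(host time.Time) (time.Duration, error) {
	out, err := runSSH("date +%s.%N")
	if err != nil {
		return 0, err
	}
	// float64 loses precision at nanosecond scale, but is fine for a millisecond-level skew check.
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = time.Second // illustrative threshold
	delta, err := guestClockDelta(time.Now())
	if err != nil {
		panic(err)
	}
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
		return
	}
	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
}
```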
	I0919 19:27:14.302210   29946 start.go:83] releasing machines lock for "ha-076992-m03", held for 24.280882386s
	I0919 19:27:14.302236   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.302488   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:14.305506   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.305858   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.305888   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.308327   29946 out.go:177] * Found network options:
	I0919 19:27:14.309814   29946 out.go:177]   - NO_PROXY=192.168.39.173,192.168.39.232
	W0919 19:27:14.311323   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 19:27:14.311345   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:27:14.311387   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.311977   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.312171   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.312284   29946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:27:14.312326   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	W0919 19:27:14.312356   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 19:27:14.312379   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:27:14.312445   29946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:27:14.312467   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:14.315326   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315477   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315739   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.315765   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315795   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.315810   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315916   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.316063   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.316081   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.316266   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.316269   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.316443   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:14.316458   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.316594   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:14.552647   29946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:27:14.559427   29946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:27:14.559487   29946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:27:14.575890   29946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 19:27:14.575920   29946 start.go:495] detecting cgroup driver to use...
	I0919 19:27:14.575983   29946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:27:14.591936   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:27:14.606858   29946 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:27:14.606921   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:27:14.621450   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:27:14.635364   29946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:27:14.756131   29946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:27:14.907154   29946 docker.go:233] disabling docker service ...
	I0919 19:27:14.907243   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:27:14.923366   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:27:14.936588   29946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:27:15.078676   29946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:27:15.198104   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:27:15.212919   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:27:15.232314   29946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:27:15.232376   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.242884   29946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:27:15.242957   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.253165   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.263320   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.273801   29946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:27:15.284463   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.296688   29946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.314869   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
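Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below. This is a reconstruction from the commands in this log, not a dump of the real file, and the TOML section headers are assumed from CRI-O's documented configuration layout.

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]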
	I0919 19:27:15.327156   29946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:27:15.338349   29946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 19:27:15.338412   29946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 19:27:15.353775   29946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:27:15.365059   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:27:15.499190   29946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:27:15.590064   29946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:27:15.590148   29946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:27:15.595200   29946 start.go:563] Will wait 60s for crictl version
	I0919 19:27:15.595269   29946 ssh_runner.go:195] Run: which crictl
	I0919 19:27:15.599029   29946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:27:15.640263   29946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:27:15.640356   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:27:15.670621   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:27:15.702613   29946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:27:15.703947   29946 out.go:177]   - env NO_PROXY=192.168.39.173
	I0919 19:27:15.705240   29946 out.go:177]   - env NO_PROXY=192.168.39.173,192.168.39.232
	I0919 19:27:15.706651   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:15.709234   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:15.709551   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:15.709578   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:15.709744   29946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:27:15.714032   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:27:15.727732   29946 mustload.go:65] Loading cluster: ha-076992
	I0919 19:27:15.727996   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:27:15.728332   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:27:15.728377   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:27:15.743011   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0919 19:27:15.743384   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:27:15.743811   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:27:15.743832   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:27:15.744550   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:27:15.744751   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:27:15.746453   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:27:15.746740   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:27:15.746776   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:27:15.761958   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0919 19:27:15.762454   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:27:15.762899   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:27:15.762916   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:27:15.763265   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:27:15.763475   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:27:15.763629   29946 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.66
	I0919 19:27:15.763640   29946 certs.go:194] generating shared ca certs ...
	I0919 19:27:15.763657   29946 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:27:15.763802   29946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:27:15.763861   29946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:27:15.763874   29946 certs.go:256] generating profile certs ...
	I0919 19:27:15.763968   29946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:27:15.764001   29946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430
	I0919 19:27:15.764017   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.232 192.168.39.66 192.168.39.254]
	I0919 19:27:15.897451   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430 ...
	I0919 19:27:15.897480   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430: {Name:mk8beb13cebda88770e8cb2f4d651fd5a45e954c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:27:15.897644   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430 ...
	I0919 19:27:15.897655   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430: {Name:mkcd8cc84233dc653483e6e6401ec1c9f04025cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:27:15.897721   29946 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:27:15.897848   29946 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:27:15.897973   29946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:27:15.897988   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:27:15.898003   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:27:15.898016   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:27:15.898028   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:27:15.898040   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:27:15.898054   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:27:15.898066   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:27:15.913133   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:27:15.913210   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:27:15.913259   29946 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:27:15.913269   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:27:15.913290   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:27:15.913314   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:27:15.913334   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:27:15.913371   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:27:15.913402   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:27:15.913413   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:15.913423   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:27:15.913453   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:27:15.916526   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:15.916928   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:27:15.916951   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:15.917154   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:27:15.917364   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:27:15.917522   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:27:15.917642   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:27:15.989416   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 19:27:15.994763   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 19:27:16.006209   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 19:27:16.010673   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 19:27:16.021439   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 19:27:16.026004   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 19:27:16.036773   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 19:27:16.041211   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 19:27:16.051440   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 19:27:16.055788   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 19:27:16.066035   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 19:27:16.071009   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 19:27:16.081291   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:27:16.106933   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:27:16.131578   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:27:16.154733   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:27:16.178142   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 19:27:16.203131   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 19:27:16.231577   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:27:16.258783   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:27:16.282643   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:27:16.307319   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:27:16.330802   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:27:16.354835   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 19:27:16.371768   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 19:27:16.387527   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 19:27:16.403635   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 19:27:16.419535   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 19:27:16.437605   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 19:27:16.453718   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 19:27:16.470564   29946 ssh_runner.go:195] Run: openssl version
	I0919 19:27:16.476297   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:27:16.486813   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:27:16.491276   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:27:16.491323   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:27:16.496992   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:27:16.507732   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:27:16.518539   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:16.523068   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:16.523123   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:16.528612   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 19:27:16.539667   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:27:16.550474   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:27:16.555341   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:27:16.555413   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:27:16.561228   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:27:16.572802   29946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:27:16.577025   29946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 19:27:16.577096   29946 kubeadm.go:934] updating node {m03 192.168.39.66 8443 v1.31.1 crio true true} ...
	I0919 19:27:16.577177   29946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:27:16.577201   29946 kube-vip.go:115] generating kube-vip config ...
	I0919 19:27:16.577231   29946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:27:16.595588   29946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:27:16.595653   29946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 19:27:16.595722   29946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:27:16.605668   29946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0919 19:27:16.605728   29946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0919 19:27:16.615281   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0919 19:27:16.615305   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:27:16.615306   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0919 19:27:16.615328   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:27:16.615349   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:27:16.615354   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0919 19:27:16.615388   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:27:16.615397   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:27:16.623586   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0919 19:27:16.623626   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0919 19:27:16.623772   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0919 19:27:16.623799   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0919 19:27:16.636164   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:27:16.636292   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:27:16.736519   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0919 19:27:16.736558   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
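The kubeadm, kubectl and kubelet binaries copied here were cached earlier from dl.k8s.io and checked against the published .sha256 files referenced above; on the new node they are only scp'd in because the stat existence checks fail. A rough, hypothetical Go sketch of such a download-and-verify step, with a placeholder destination path and no claim to being minikube's actual downloader:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// download writes the body of url to path and returns the SHA-256 of the bytes written.
func download(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	sum := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, sum), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(sum.Sum(nil)), nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	got, err := download(url, "/tmp/kubectl") // placeholder destination
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(url + ".sha256") // the .sha256 file holds the expected digest in hex
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if got != strings.TrimSpace(string(want)) {
		panic("checksum mismatch for kubectl")
	}
	fmt.Println("kubectl checksum verified")
}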
	I0919 19:27:17.474932   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 19:27:17.484832   29946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 19:27:17.501777   29946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:27:17.518686   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:27:17.535414   29946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:27:17.539429   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:27:17.552345   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:27:17.687800   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:27:17.706912   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:27:17.707271   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:27:17.707332   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:27:17.723234   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46531
	I0919 19:27:17.723773   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:27:17.724317   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:27:17.724344   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:27:17.724711   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:27:17.724916   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:27:17.725046   29946 start.go:317] joinCluster: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:27:17.725198   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 19:27:17.725213   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:27:17.728260   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:17.728743   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:27:17.728764   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:17.728933   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:27:17.729087   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:27:17.729233   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:27:17.729362   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:27:17.893938   29946 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:27:17.893987   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nfzvhu.osmpbokubpd9m5ji --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m03 --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443"
	I0919 19:27:40.045829   29946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nfzvhu.osmpbokubpd9m5ji --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m03 --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443": (22.151818373s)
	I0919 19:27:40.045864   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 19:27:40.606802   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076992-m03 minikube.k8s.io/updated_at=2024_09_19T19_27_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=ha-076992 minikube.k8s.io/primary=false
	I0919 19:27:40.720562   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076992-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 19:27:40.852305   29946 start.go:319] duration metric: took 23.127257351s to joinCluster
	I0919 19:27:40.852371   29946 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:27:40.852725   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:27:40.853772   29946 out.go:177] * Verifying Kubernetes components...
	I0919 19:27:40.855055   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:27:41.140593   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:27:41.167178   29946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:27:41.167526   29946 kapi.go:59] client config for ha-076992: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 19:27:41.167609   29946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.173:8443
	I0919 19:27:41.167883   29946 node_ready.go:35] waiting up to 6m0s for node "ha-076992-m03" to be "Ready" ...
	I0919 19:27:41.167964   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:41.167975   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:41.167986   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:41.167992   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:41.171312   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
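From this point the log repeats the same GET against /api/v1/nodes/ha-076992-m03 roughly every 500ms, waiting up to 6m0s for the node's Ready condition to turn True. A minimal client-go sketch of that kind of readiness poll is shown below, with a placeholder kubeconfig path; minikube itself issues these requests through its own round_trippers wrapper rather than this exact code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path and the node name from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-076992-m03", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}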
	I0919 19:27:41.668093   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:41.668122   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:41.668136   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:41.668145   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:41.671847   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:42.169049   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:42.169078   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:42.169089   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:42.169097   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:42.173253   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:42.668124   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:42.668154   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:42.668165   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:42.668172   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:42.671705   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:43.169071   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:43.169099   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:43.169111   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:43.169119   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:43.172988   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:43.173723   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:43.668069   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:43.668090   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:43.668098   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:43.668102   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:43.671379   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:44.168189   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:44.168213   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:44.168224   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:44.168232   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:44.172163   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:44.668238   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:44.668263   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:44.668292   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:44.668300   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:44.672297   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:45.168809   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:45.168914   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:45.168943   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:45.168952   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:45.172818   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:45.668795   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:45.668819   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:45.668829   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:45.668833   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:45.672833   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:45.673726   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:46.168145   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:46.168176   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:46.168188   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:46.168195   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:46.171541   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:46.669018   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:46.669043   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:46.669053   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:46.669058   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:46.672077   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:47.168070   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:47.168095   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:47.168106   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:47.168112   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:47.171091   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:47.668131   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:47.668156   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:47.668167   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:47.668173   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:47.671585   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:48.168035   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:48.168054   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:48.168066   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:48.168071   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:48.172365   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:48.172854   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:48.668232   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:48.668261   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:48.668269   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:48.668273   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:48.671672   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:49.168763   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:49.168784   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:49.168792   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:49.168796   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:49.172225   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:49.668291   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:49.668312   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:49.668319   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:49.668323   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:49.671622   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:50.168990   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:50.169014   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:50.169023   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:50.169028   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:50.172111   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:50.668480   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:50.668500   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:50.668508   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:50.668514   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:50.672693   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:50.673442   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:51.168845   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:51.168870   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:51.168883   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:51.168896   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:51.172225   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:51.668471   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:51.668494   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:51.668505   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:51.668510   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:51.672549   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:52.168467   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:52.168490   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:52.168499   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:52.168502   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:52.172284   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:52.668300   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:52.668325   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:52.668337   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:52.668345   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:52.671626   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:53.168043   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:53.168066   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:53.168076   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:53.168082   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:53.171507   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:53.172186   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:53.668508   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:53.668530   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:53.668539   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:53.668544   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:53.674065   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:54.169042   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:54.169081   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:54.169093   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:54.169101   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:54.172484   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:54.668693   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:54.668716   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:54.668724   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:54.668728   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:54.671712   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:55.168811   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:55.168838   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:55.168850   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:55.168856   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:55.171986   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:55.172564   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:55.669027   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:55.669049   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:55.669060   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:55.669116   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:55.674537   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:56.168644   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:56.168667   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:56.168674   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:56.168677   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:56.172061   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:56.669121   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:56.669152   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:56.669164   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:56.669170   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:56.672708   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:57.168818   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:57.168844   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:57.168856   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:57.168865   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:57.172258   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:57.172846   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:57.668135   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:57.668158   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:57.668169   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:57.668174   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:57.671424   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:58.168923   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:58.168945   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:58.168953   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:58.168956   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:58.172623   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:58.668685   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:58.668705   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:58.668713   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:58.668717   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:58.671912   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.168858   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:59.168880   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.168889   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.168892   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.171841   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.172469   29946 node_ready.go:49] node "ha-076992-m03" has status "Ready":"True"
	I0919 19:27:59.172488   29946 node_ready.go:38] duration metric: took 18.004586894s for node "ha-076992-m03" to be "Ready" ...
	I0919 19:27:59.172499   29946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 19:27:59.172582   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:27:59.172595   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.172604   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.172609   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.178464   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:59.185406   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.185497   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bst8x
	I0919 19:27:59.185507   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.185518   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.185526   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.188442   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.189103   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.189120   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.189130   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.189136   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.191329   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.191851   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.191866   29946 pod_ready.go:82] duration metric: took 6.439364ms for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.191873   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.191928   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nbds4
	I0919 19:27:59.191937   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.191944   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.191948   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.194394   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.195009   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.195025   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.195031   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.195035   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.197517   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.198256   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.198270   29946 pod_ready.go:82] duration metric: took 6.390833ms for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.198278   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.198317   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992
	I0919 19:27:59.198324   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.198331   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.198336   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.200499   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.201171   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.201184   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.201190   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.201201   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.203402   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.203953   29946 pod_ready.go:93] pod "etcd-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.203973   29946 pod_ready.go:82] duration metric: took 5.68948ms for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.203984   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.204042   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m02
	I0919 19:27:59.204053   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.204062   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.204073   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.206409   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.207206   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:27:59.207225   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.207234   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.207242   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.209682   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.210215   29946 pod_ready.go:93] pod "etcd-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.210231   29946 pod_ready.go:82] duration metric: took 6.235966ms for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.210241   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.369687   29946 request.go:632] Waited for 159.345593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m03
	I0919 19:27:59.369758   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m03
	I0919 19:27:59.369768   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.369776   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.369782   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.373326   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.569343   29946 request.go:632] Waited for 195.374141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:59.569427   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:59.569435   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.569444   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.569454   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.572773   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.573760   29946 pod_ready.go:93] pod "etcd-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.573784   29946 pod_ready.go:82] duration metric: took 363.534844ms for pod "etcd-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.573804   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.769848   29946 request.go:632] Waited for 195.964398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:27:59.769916   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:27:59.769924   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.769941   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.769951   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.773613   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.969692   29946 request.go:632] Waited for 195.271169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.969763   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.969771   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.969782   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.969790   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.975454   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:59.976399   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.976419   29946 pod_ready.go:82] duration metric: took 402.608428ms for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.976442   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.169862   29946 request.go:632] Waited for 193.313777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:28:00.169932   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:28:00.169948   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.169963   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.169971   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.173456   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.369679   29946 request.go:632] Waited for 195.364808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:00.369746   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:00.369757   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.369769   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.369777   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.373078   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.373725   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:00.373745   29946 pod_ready.go:82] duration metric: took 397.293364ms for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.373754   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.569238   29946 request.go:632] Waited for 195.416262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m03
	I0919 19:28:00.569304   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m03
	I0919 19:28:00.569310   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.569317   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.569325   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.572712   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.769839   29946 request.go:632] Waited for 196.213847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:00.769902   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:00.769909   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.769916   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.769925   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.773054   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.773595   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:00.773611   29946 pod_ready.go:82] duration metric: took 399.848276ms for pod "kube-apiserver-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.773623   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.969813   29946 request.go:632] Waited for 196.102797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:28:00.969866   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:28:00.969871   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.969878   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.969883   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.978905   29946 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0919 19:28:01.169966   29946 request.go:632] Waited for 190.375143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:01.170066   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:01.170080   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.170090   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.170095   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.173733   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.174395   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:01.174419   29946 pod_ready.go:82] duration metric: took 400.786244ms for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.174431   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.369465   29946 request.go:632] Waited for 194.942354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:28:01.369536   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:28:01.369546   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.369559   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.369570   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.373178   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.569830   29946 request.go:632] Waited for 195.884004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:01.569887   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:01.569894   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.569906   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.569911   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.573021   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.573575   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:01.573597   29946 pod_ready.go:82] duration metric: took 399.158503ms for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.573610   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.769720   29946 request.go:632] Waited for 196.039819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m03
	I0919 19:28:01.769796   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m03
	I0919 19:28:01.769804   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.769815   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.769863   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.773496   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.969679   29946 request.go:632] Waited for 195.366002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:01.969751   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:01.969759   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.969770   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.969778   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.973411   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.973966   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:01.973986   29946 pod_ready.go:82] duration metric: took 400.368344ms for pod "kube-controller-manager-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.973999   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.169159   29946 request.go:632] Waited for 195.067817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:28:02.169233   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:28:02.169240   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.169249   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.169255   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.172645   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:02.369743   29946 request.go:632] Waited for 196.39611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:02.369834   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:02.369848   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.369859   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.369869   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.372902   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:02.373658   29946 pod_ready.go:93] pod "kube-proxy-4d8dc" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:02.373679   29946 pod_ready.go:82] duration metric: took 399.671379ms for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.373695   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qxzr" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.569759   29946 request.go:632] Waited for 195.99907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qxzr
	I0919 19:28:02.569828   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qxzr
	I0919 19:28:02.569835   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.569845   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.569850   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.573245   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:02.769286   29946 request.go:632] Waited for 195.311639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:02.769401   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:02.769411   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.769421   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.769429   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.774902   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:28:02.775546   29946 pod_ready.go:93] pod "kube-proxy-4qxzr" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:02.775569   29946 pod_ready.go:82] duration metric: took 401.866343ms for pod "kube-proxy-4qxzr" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.775582   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.969688   29946 request.go:632] Waited for 194.028715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:28:02.969782   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:28:02.969793   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.969804   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.969814   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.973511   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.169667   29946 request.go:632] Waited for 195.362144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.169732   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.169740   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.169750   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.169759   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.173206   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.173751   29946 pod_ready.go:93] pod "kube-proxy-tjtfj" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:03.173769   29946 pod_ready.go:82] duration metric: took 398.180461ms for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.173777   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.369899   29946 request.go:632] Waited for 196.051119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:28:03.370000   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:28:03.370008   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.370019   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.370028   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.373045   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:28:03.569018   29946 request.go:632] Waited for 195.269584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:03.569098   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:03.569104   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.569111   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.569117   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.572980   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.573818   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:03.573842   29946 pod_ready.go:82] duration metric: took 400.056994ms for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.573856   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.768884   29946 request.go:632] Waited for 194.957925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:28:03.768975   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:28:03.768982   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.768989   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.768994   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.772280   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.969113   29946 request.go:632] Waited for 196.276201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.969173   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.969181   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.969192   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.969201   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.972689   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.973513   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:03.973536   29946 pod_ready.go:82] duration metric: took 399.670878ms for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.973550   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:04.169664   29946 request.go:632] Waited for 196.044338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m03
	I0919 19:28:04.169768   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m03
	I0919 19:28:04.169779   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.169790   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.169795   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.173604   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:04.369491   29946 request.go:632] Waited for 195.428121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:04.369586   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:04.369594   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.369605   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.369611   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.373358   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:04.373807   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:04.373827   29946 pod_ready.go:82] duration metric: took 400.269116ms for pod "kube-scheduler-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:04.373841   29946 pod_ready.go:39] duration metric: took 5.201326396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 19:28:04.373868   29946 api_server.go:52] waiting for apiserver process to appear ...
	I0919 19:28:04.373935   29946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:28:04.390528   29946 api_server.go:72] duration metric: took 23.538119441s to wait for apiserver process to appear ...
	I0919 19:28:04.390551   29946 api_server.go:88] waiting for apiserver healthz status ...
	I0919 19:28:04.390571   29946 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0919 19:28:04.396791   29946 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0919 19:28:04.396862   29946 round_trippers.go:463] GET https://192.168.39.173:8443/version
	I0919 19:28:04.396873   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.396882   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.396889   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.397946   29946 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 19:28:04.398142   29946 api_server.go:141] control plane version: v1.31.1
	I0919 19:28:04.398162   29946 api_server.go:131] duration metric: took 7.603365ms to wait for apiserver health ...
	I0919 19:28:04.398171   29946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 19:28:04.569591   29946 request.go:632] Waited for 171.340636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.569649   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.569654   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.569661   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.569665   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.575663   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:28:04.582592   29946 system_pods.go:59] 24 kube-system pods found
	I0919 19:28:04.582629   29946 system_pods.go:61] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:28:04.582636   29946 system_pods.go:61] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:28:04.582641   29946 system_pods.go:61] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:28:04.582646   29946 system_pods.go:61] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:28:04.582651   29946 system_pods.go:61] "etcd-ha-076992-m03" [2cb8094f-2857-49e8-a740-58c09de52bb5] Running
	I0919 19:28:04.582656   29946 system_pods.go:61] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:28:04.582660   29946 system_pods.go:61] "kindnet-89gmh" [696397d5-76c4-4565-9baa-042392bc74c8] Running
	I0919 19:28:04.582665   29946 system_pods.go:61] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:28:04.582670   29946 system_pods.go:61] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:28:04.582674   29946 system_pods.go:61] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:28:04.582679   29946 system_pods.go:61] "kube-apiserver-ha-076992-m03" [7ada8b62-958d-4bbf-9b60-4f2f8738e864] Running
	I0919 19:28:04.582685   29946 system_pods.go:61] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:28:04.582696   29946 system_pods.go:61] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:28:04.582705   29946 system_pods.go:61] "kube-controller-manager-ha-076992-m03" [b12ed136-a047-45cc-966f-fdbb624ee027] Running
	I0919 19:28:04.582710   29946 system_pods.go:61] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:28:04.582715   29946 system_pods.go:61] "kube-proxy-4qxzr" [91b8da75-fb68-4cfe-b463-5f4ce57a9fbc] Running
	I0919 19:28:04.582719   29946 system_pods.go:61] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:28:04.582722   29946 system_pods.go:61] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:28:04.582725   29946 system_pods.go:61] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:28:04.582729   29946 system_pods.go:61] "kube-scheduler-ha-076992-m03" [7b69ed21-49ee-47d0-add2-83b93f61b3cf] Running
	I0919 19:28:04.582732   29946 system_pods.go:61] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:28:04.582735   29946 system_pods.go:61] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:28:04.582738   29946 system_pods.go:61] "kube-vip-ha-076992-m03" [8e4ad9ad-38d3-4189-8ea9-16a7e8f87f08] Running
	I0919 19:28:04.582741   29946 system_pods.go:61] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:28:04.582746   29946 system_pods.go:74] duration metric: took 184.569532ms to wait for pod list to return data ...
	I0919 19:28:04.582762   29946 default_sa.go:34] waiting for default service account to be created ...
	I0919 19:28:04.769178   29946 request.go:632] Waited for 186.318811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:28:04.769251   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:28:04.769259   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.769269   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.769302   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.773568   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:28:04.773707   29946 default_sa.go:45] found service account: "default"
	I0919 19:28:04.773726   29946 default_sa.go:55] duration metric: took 190.956992ms for default service account to be created ...
	I0919 19:28:04.773736   29946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 19:28:04.968965   29946 request.go:632] Waited for 195.155154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.969039   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.969056   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.969099   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.969108   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.974937   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:28:04.983584   29946 system_pods.go:86] 24 kube-system pods found
	I0919 19:28:04.983617   29946 system_pods.go:89] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:28:04.983625   29946 system_pods.go:89] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:28:04.983629   29946 system_pods.go:89] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:28:04.983633   29946 system_pods.go:89] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:28:04.983637   29946 system_pods.go:89] "etcd-ha-076992-m03" [2cb8094f-2857-49e8-a740-58c09de52bb5] Running
	I0919 19:28:04.983641   29946 system_pods.go:89] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:28:04.983645   29946 system_pods.go:89] "kindnet-89gmh" [696397d5-76c4-4565-9baa-042392bc74c8] Running
	I0919 19:28:04.983648   29946 system_pods.go:89] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:28:04.983652   29946 system_pods.go:89] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:28:04.983656   29946 system_pods.go:89] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:28:04.983659   29946 system_pods.go:89] "kube-apiserver-ha-076992-m03" [7ada8b62-958d-4bbf-9b60-4f2f8738e864] Running
	I0919 19:28:04.983663   29946 system_pods.go:89] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:28:04.983667   29946 system_pods.go:89] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:28:04.983670   29946 system_pods.go:89] "kube-controller-manager-ha-076992-m03" [b12ed136-a047-45cc-966f-fdbb624ee027] Running
	I0919 19:28:04.983674   29946 system_pods.go:89] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:28:04.983677   29946 system_pods.go:89] "kube-proxy-4qxzr" [91b8da75-fb68-4cfe-b463-5f4ce57a9fbc] Running
	I0919 19:28:04.983680   29946 system_pods.go:89] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:28:04.983683   29946 system_pods.go:89] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:28:04.983687   29946 system_pods.go:89] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:28:04.983691   29946 system_pods.go:89] "kube-scheduler-ha-076992-m03" [7b69ed21-49ee-47d0-add2-83b93f61b3cf] Running
	I0919 19:28:04.983694   29946 system_pods.go:89] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:28:04.983697   29946 system_pods.go:89] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:28:04.983708   29946 system_pods.go:89] "kube-vip-ha-076992-m03" [8e4ad9ad-38d3-4189-8ea9-16a7e8f87f08] Running
	I0919 19:28:04.983714   29946 system_pods.go:89] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:28:04.983719   29946 system_pods.go:126] duration metric: took 209.976345ms to wait for k8s-apps to be running ...
	I0919 19:28:04.983728   29946 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 19:28:04.983768   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:28:05.000249   29946 system_svc.go:56] duration metric: took 16.508734ms WaitForService to wait for kubelet
	I0919 19:28:05.000280   29946 kubeadm.go:582] duration metric: took 24.147874151s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:28:05.000306   29946 node_conditions.go:102] verifying NodePressure condition ...
	I0919 19:28:05.168981   29946 request.go:632] Waited for 168.596869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes
	I0919 19:28:05.169036   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes
	I0919 19:28:05.169043   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:05.169052   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:05.169059   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:05.172968   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:05.174140   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:28:05.174163   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:28:05.174173   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:28:05.174177   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:28:05.174180   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:28:05.174183   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:28:05.174187   29946 node_conditions.go:105] duration metric: took 173.877315ms to run NodePressure ...
	I0919 19:28:05.174197   29946 start.go:241] waiting for startup goroutines ...
	I0919 19:28:05.174217   29946 start.go:255] writing updated cluster config ...
	I0919 19:28:05.174491   29946 ssh_runner.go:195] Run: rm -f paused
	I0919 19:28:05.224162   29946 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 19:28:05.226313   29946 out.go:177] * Done! kubectl is now configured to use "ha-076992" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.235677715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774304235652641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b11a1d6-aae1-427b-a48b-77a2dbf48991 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.236149110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a15fde25-bb7f-4d93-8e33-208ff13a5c72 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.236225316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a15fde25-bb7f-4d93-8e33-208ff13a5c72 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.236451749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a15fde25-bb7f-4d93-8e33-208ff13a5c72 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.274716641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0a58e4d-ec46-4d2f-a050-7c10ee914a81 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.274802071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0a58e4d-ec46-4d2f-a050-7c10ee914a81 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.276143643Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7194406e-d587-4c94-8429-b89206271c3c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.276617468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774304276594620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7194406e-d587-4c94-8429-b89206271c3c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.277235064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eeabd34a-8398-4c5d-aba8-57b3fe17d427 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.277455435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eeabd34a-8398-4c5d-aba8-57b3fe17d427 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.277807922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eeabd34a-8398-4c5d-aba8-57b3fe17d427 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.321349078Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbfbc58b-8e25-428e-ac57-da2f6d1641ac name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.321439854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbfbc58b-8e25-428e-ac57-da2f6d1641ac name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.322812830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=123d4cfc-e779-4c4c-847d-09f7ec54a963 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.323446212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774304323420634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=123d4cfc-e779-4c4c-847d-09f7ec54a963 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.324016552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4892a845-d0b8-4375-8165-378f46cb8fc8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.324084565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4892a845-d0b8-4375-8165-378f46cb8fc8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.324775726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4892a845-d0b8-4375-8165-378f46cb8fc8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.365233750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a7c6762-cd51-4ccb-a846-2de003707ef0 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.365350691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a7c6762-cd51-4ccb-a846-2de003707ef0 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.366566174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66da0378-b919-4d60-9e83-d1eafb5314ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.367061106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774304367037899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66da0378-b919-4d60-9e83-d1eafb5314ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.367490496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=929718c5-2255-4d23-9c4a-081f3d9ada76 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.367545860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=929718c5-2255-4d23-9c4a-081f3d9ada76 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:44 ha-076992 crio[661]: time="2024-09-19 19:31:44.367787648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=929718c5-2255-4d23-9c4a-081f3d9ada76 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52db63dad4c31       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a8aaf854df641       busybox-7dff88458-8wfb7
	17ef846dadbee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   8583d1eda759f       coredns-7c65d6cfc9-nbds4
	cbaa19f6b3857       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   d65bb54e4c426       coredns-7c65d6cfc9-bst8x
	6eb7d57489862       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   5d96139db90a8       storage-provisioner
	d623b5f012d8a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   0273544afdfa6       kindnet-j846w
	9d62ecb2cc70a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   2a6c6ac66a434       kube-proxy-4d8dc
	3132b4bb29e16       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   9f7ef19609750       kube-vip-ha-076992
	5745c8d186325       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   09b02f34308ad       kube-scheduler-ha-076992
	f7da5064b19f5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   9cebb02c5eed5       kube-apiserver-ha-076992
	3beffc038ef33       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   fc5737a4c0f5c       etcd-ha-076992
	5b605d500b3ee       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   6a8db8524df21       kube-controller-manager-ha-076992
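	
	The table above is the node-local container listing captured alongside the CRI-O journal; it reflects the same data as the ListContainers responses in the debug log. As a minimal sketch, assuming shell access to the control-plane node (for example via "minikube ssh -p ha-076992") and that crictl is present on the node, an equivalent listing can be read straight from the CRI-O socket shown in the node annotations:
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a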
	
	
	==> coredns [17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3] <==
	[INFO] 10.244.0.4:34108 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.006817779s
	[INFO] 10.244.0.4:40322 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013826742s
	[INFO] 10.244.1.2:55399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298188s
	[INFO] 10.244.1.2:35261 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000170423s
	[INFO] 10.244.2.2:57349 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000113863s
	[INFO] 10.244.2.2:35304 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000093782s
	[INFO] 10.244.0.4:60710 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175542s
	[INFO] 10.244.0.4:56638 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002407779s
	[INFO] 10.244.1.2:60721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148724s
	[INFO] 10.244.2.2:40070 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138971s
	[INFO] 10.244.2.2:53394 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186542s
	[INFO] 10.244.2.2:54178 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000225634s
	[INFO] 10.244.2.2:53480 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001438271s
	[INFO] 10.244.2.2:48475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168626s
	[INFO] 10.244.2.2:49380 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160453s
	[INFO] 10.244.2.2:38326 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100289s
	[INFO] 10.244.1.2:47564 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107018s
	[INFO] 10.244.0.4:55521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119496s
	[INFO] 10.244.0.4:51830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118694s
	[INFO] 10.244.0.4:49301 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000181413s
	[INFO] 10.244.1.2:38961 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124955s
	[INFO] 10.244.1.2:37060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092863s
	[INFO] 10.244.1.2:44024 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085892s
	[INFO] 10.244.2.2:35688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014156s
	[INFO] 10.244.2.2:33974 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170311s
	
	
	==> coredns [cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0] <==
	[INFO] 10.244.0.4:45775 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000206662s
	[INFO] 10.244.0.4:34019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123934s
	[INFO] 10.244.1.2:60797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218519s
	[INFO] 10.244.1.2:44944 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001794304s
	[INFO] 10.244.1.2:51111 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185225s
	[INFO] 10.244.1.2:46956 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160685s
	[INFO] 10.244.1.2:36318 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001321241s
	[INFO] 10.244.1.2:53158 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118134s
	[INFO] 10.244.1.2:45995 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102925s
	[INFO] 10.244.2.2:55599 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001757807s
	[INFO] 10.244.0.4:50520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118756s
	[INFO] 10.244.0.4:48294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189838s
	[INFO] 10.244.0.4:52710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005729s
	[INFO] 10.244.0.4:56525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085763s
	[INFO] 10.244.1.2:43917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168832s
	[INFO] 10.244.1.2:34972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200932s
	[INFO] 10.244.1.2:50680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181389s
	[INFO] 10.244.2.2:51430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152587s
	[INFO] 10.244.2.2:37924 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000317695s
	[INFO] 10.244.2.2:46377 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000371446s
	[INFO] 10.244.2.2:36790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012815s
	[INFO] 10.244.0.4:35196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000409388s
	[INFO] 10.244.1.2:43265 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235404s
	[INFO] 10.244.2.2:56515 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113892s
	[INFO] 10.244.2.2:33574 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251263s
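	
	Each coredns line above follows the CoreDNS log plugin format: client address and port, query id, the quoted query (type, class, name, protocol, request size, DNSSEC DO bit, EDNS buffer size), then the response code, response flags, response size in bytes, and the lookup duration. As a minimal sketch, assuming the kubeconfig context is named ha-076992 and the busybox test pod is still running, one of the recorded in-cluster lookups can be replayed with:
	
	  kubectl --context ha-076992 exec busybox-7dff88458-8wfb7 -- nslookup kubernetes.default.svc.cluster.local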
	
	
	==> describe nodes <==
	Name:               ha-076992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T19_25_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:31:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    ha-076992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 88962b0779f84ff6915974a39d1a24ba
	  System UUID:                88962b07-79f8-4ff6-9159-74a39d1a24ba
	  Boot ID:                    f4736dd6-fd6e-4dc3-b2ee-64f8773325ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8wfb7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7c65d6cfc9-bst8x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m8s
	  kube-system                 coredns-7c65d6cfc9-nbds4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m8s
	  kube-system                 etcd-ha-076992                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m13s
	  kube-system                 kindnet-j846w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-076992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-controller-manager-ha-076992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-4d8dc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-076992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-076992                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m6s   kube-proxy       
	  Normal  Starting                 6m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m13s  kubelet          Node ha-076992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s  kubelet          Node ha-076992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s  kubelet          Node ha-076992 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m10s  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal  NodeReady                5m55s  kubelet          Node ha-076992 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal  RegisteredNode           3m59s  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	
	
	Name:               ha-076992-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_26_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:26:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:29:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-076992-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fbb92a6f6fa49d49b42ed70b015086d
	  System UUID:                7fbb92a6-f6fa-49d4-9b42-ed70b015086d
	  Boot ID:                    d99d8bb8-fed0-4ef9-95a0-7b5cb6b4a8e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c64rv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-076992-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-6d8pz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m20s
	  kube-system                 kube-apiserver-ha-076992-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-076992-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-tjtfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-ha-076992-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-vip-ha-076992-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-076992-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-076992-m02 status is now: NodeNotReady
	
	
	Name:               ha-076992-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_27_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:27:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:31:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.66
	  Hostname:    ha-076992-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0db72b5d16d8492b8f2f42e6cedd7691
	  System UUID:                0db72b5d-16d8-492b-8f2f-42e6cedd7691
	  Boot ID:                    a11e77a1-44c6-47d3-9894-1e2db25df61f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jl6lr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-076992-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m5s
	  kube-system                 kindnet-89gmh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m7s
	  kube-system                 kube-apiserver-ha-076992-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-controller-manager-ha-076992-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-4qxzr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-ha-076992-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-076992-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node ha-076992-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node ha-076992-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node ha-076992-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	
	
	Name:               ha-076992-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_28_43_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:28:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:31:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:28:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:28:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:28:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:29:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-076992-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37704cd295b34d23a0864637f4482597
	  System UUID:                37704cd2-95b3-4d23-a086-4637f4482597
	  Boot ID:                    7afcea43-e30f-4573-9142-69832448eb86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8jqvd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-8gt7w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-076992-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-076992-m04 status is now: NodeReady
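
	In the node descriptions above, ha-076992, ha-076992-m03 and ha-076992-m04 report Ready=True, while ha-076992-m02 carries node.kubernetes.io/unreachable taints and shows every condition as Unknown ("Kubelet stopped posting node status"), matching its NodeNotReady event. A minimal triage sketch for reading similar reports (not part of the test run; uses only stock kubectl) to pull just the Ready condition per node:

		kubectl get nodes -o wide
		kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'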
	
	
	==> dmesg <==
	[Sep19 19:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050539] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040218] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779433] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep19 19:25] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.560626] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.418534] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.061113] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050106] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.181483] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.133235] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.281192] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.948588] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.762419] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.059014] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.974334] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.083682] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344336] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.503085] kauditd_printk_skb: 38 callbacks suppressed
	[Sep19 19:26] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018] <==
	{"level":"warn","ts":"2024-09-19T19:31:44.642030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.654253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.657687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.660376Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.664098Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.674829Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.679955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.684214Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.757355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.758061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.762104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.763137Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.768877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.775697Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.779129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.782515Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.788392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.794844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.801256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.804689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.807608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.810537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.816865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.823326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:44.857956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:31:44 up 6 min,  0 users,  load average: 0.13, 0.19, 0.10
	Linux ha-076992 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4] <==
	I0919 19:31:09.295666       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:31:19.301225       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:31:19.301271       1 main.go:299] handling current node
	I0919 19:31:19.301287       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:31:19.301293       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:31:19.301469       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:31:19.301493       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:31:19.301622       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:31:19.301654       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:31:29.299423       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:31:29.299534       1 main.go:299] handling current node
	I0919 19:31:29.299588       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:31:29.299608       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:31:29.299733       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:31:29.299753       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:31:29.299816       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:31:29.299834       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:31:39.295069       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:31:39.295797       1 main.go:299] handling current node
	I0919 19:31:39.295864       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:31:39.295880       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:31:39.296147       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:31:39.296174       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:31:39.296250       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:31:39.296272       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501] <==
	I0919 19:25:31.486188       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 19:25:31.506649       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 19:25:35.598891       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0919 19:25:35.750237       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0919 19:27:38.100207       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 13.658µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 19:27:38.100632       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 19:27:38.102611       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 19:27:38.103892       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 19:27:38.105160       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.382601ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0919 19:28:11.389256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45218: use of closed network connection
	E0919 19:28:11.576268       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45246: use of closed network connection
	E0919 19:28:11.773899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45258: use of closed network connection
	E0919 19:28:11.977200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45272: use of closed network connection
	E0919 19:28:12.158836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45298: use of closed network connection
	E0919 19:28:12.343311       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45308: use of closed network connection
	E0919 19:28:12.533653       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45320: use of closed network connection
	E0919 19:28:12.708696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45336: use of closed network connection
	E0919 19:28:12.880339       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45348: use of closed network connection
	E0919 19:28:13.172557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45378: use of closed network connection
	E0919 19:28:13.360524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45402: use of closed network connection
	E0919 19:28:13.537403       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45414: use of closed network connection
	E0919 19:28:13.726245       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45428: use of closed network connection
	E0919 19:28:13.903745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45458: use of closed network connection
	E0919 19:28:14.076234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45480: use of closed network connection
	W0919 19:29:39.951311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.173 192.168.39.66]
	
	
	==> kube-controller-manager [5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b] <==
	I0919 19:28:42.651135       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-076992-m04\" does not exist"
	I0919 19:28:42.696072       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-076992-m04" podCIDRs=["10.244.3.0/24"]
	I0919 19:28:42.696237       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:42.696385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:42.984651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:43.058418       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:43.437129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:44.991734       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-076992-m04"
	I0919 19:28:44.991858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:45.053922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:45.913734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:45.955524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:52.981964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:03.869117       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076992-m04"
	I0919 19:29:03.870215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:03.885512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:05.009111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:13.638377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:30:00.034775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:30:00.035207       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076992-m04"
	I0919 19:30:00.059561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:30:00.073804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.744937ms"
	I0919 19:30:00.073933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.501µs"
	I0919 19:30:00.989765       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:30:05.283636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	
	
	==> kube-proxy [9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 19:25:37.903821       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 19:25:37.932314       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.173"]
	E0919 19:25:37.932452       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:25:37.975043       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 19:25:37.975079       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 19:25:37.975107       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:25:37.978675       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:25:37.979280       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:25:37.979417       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:25:37.981041       1 config.go:199] "Starting service config controller"
	I0919 19:25:37.981519       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:25:37.981599       1 config.go:328] "Starting node config controller"
	I0919 19:25:37.981623       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:25:37.982405       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:25:37.982433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:25:38.081647       1 shared_informer.go:320] Caches are synced for service config
	I0919 19:25:38.081721       1 shared_informer.go:320] Caches are synced for node config
	I0919 19:25:38.082821       1 shared_informer.go:320] Caches are synced for endpoint slice config
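
	The nftables cleanup errors above ("Operation not supported") are followed by kube-proxy selecting the iptables proxier and syncing its caches, so they did not block startup on this kernel. A hypothetical follow-up check (assumes shell access to the primary node via minikube ssh; not something the test performs) to confirm service rules were actually programmed:

		minikube -p ha-076992 ssh "sudo iptables -t nat -S KUBE-SERVICES | head"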
	
	
	==> kube-scheduler [5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1] <==
	W0919 19:25:29.292699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 19:25:29.292789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.292883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 19:25:29.292917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.315628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 19:25:29.315915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.317062       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 19:25:29.317708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.375676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 19:25:29.375771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.399790       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 19:25:29.399959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.458469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 19:25:29.458568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.500384       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 19:25:29.500442       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0919 19:25:32.657764       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 19:28:06.097590       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jl6lr\": pod busybox-7dff88458-jl6lr is already assigned to node \"ha-076992-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jl6lr" node="ha-076992-m03"
	E0919 19:28:06.098198       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3f7ee95d-11f9-4073-8fa9-d4aa5fc08d99(default/busybox-7dff88458-jl6lr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jl6lr"
	E0919 19:28:06.098359       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jl6lr\": pod busybox-7dff88458-jl6lr is already assigned to node \"ha-076992-m03\"" pod="default/busybox-7dff88458-jl6lr"
	I0919 19:28:06.098540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jl6lr" node="ha-076992-m03"
	E0919 19:28:06.176510       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8wfb7\": pod busybox-7dff88458-8wfb7 is already assigned to node \"ha-076992\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8wfb7" node="ha-076992"
	E0919 19:28:06.176725       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e9e5cd58-874f-41c6-8c0a-d37b5101a1f9(default/busybox-7dff88458-8wfb7) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8wfb7"
	E0919 19:28:06.181327       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8wfb7\": pod busybox-7dff88458-8wfb7 is already assigned to node \"ha-076992\"" pod="default/busybox-7dff88458-8wfb7"
	I0919 19:28:06.181857       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8wfb7" node="ha-076992"
	
	
	==> kubelet <==
	Sep 19 19:30:31 ha-076992 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 19:30:31 ha-076992 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 19:30:31 ha-076992 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 19:30:31 ha-076992 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 19:30:31 ha-076992 kubelet[1304]: E0919 19:30:31.509860    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774231509247618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:31 ha-076992 kubelet[1304]: E0919 19:30:31.509926    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774231509247618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:41 ha-076992 kubelet[1304]: E0919 19:30:41.515125    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774241513934130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:41 ha-076992 kubelet[1304]: E0919 19:30:41.515489    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774241513934130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:51 ha-076992 kubelet[1304]: E0919 19:30:51.516656    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774251516247410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:51 ha-076992 kubelet[1304]: E0919 19:30:51.516759    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774251516247410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:01 ha-076992 kubelet[1304]: E0919 19:31:01.520748    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774261520199169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:01 ha-076992 kubelet[1304]: E0919 19:31:01.520803    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774261520199169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:11 ha-076992 kubelet[1304]: E0919 19:31:11.523342    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774271522952876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:11 ha-076992 kubelet[1304]: E0919 19:31:11.523611    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774271522952876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:21 ha-076992 kubelet[1304]: E0919 19:31:21.527464    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774281526662586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:21 ha-076992 kubelet[1304]: E0919 19:31:21.527558    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774281526662586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:31 ha-076992 kubelet[1304]: E0919 19:31:31.406408    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 19 19:31:31 ha-076992 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 19:31:31 ha-076992 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 19:31:31 ha-076992 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 19:31:31 ha-076992 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 19:31:31 ha-076992 kubelet[1304]: E0919 19:31:31.535893    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774291534622152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:31 ha-076992 kubelet[1304]: E0919 19:31:31.535937    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774291534622152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:41 ha-076992 kubelet[1304]: E0919 19:31:41.537584    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774301537350727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:41 ha-076992 kubelet[1304]: E0919 19:31:41.537608    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774301537350727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076992 -n ha-076992
helpers_test.go:261: (dbg) Run:  kubectl --context ha-076992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.60s)
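
(Note on the recurring kubelet noise in the log above: the "Could not set up iptables canary" messages mean the guest kernel has no ip6tables nat table available, and the eviction-manager errors mean CRI-O is not returning image filesystem stats; the same lines repeat throughout the post-mortem logs and are not specific to this failure. If reproducing locally, a minimal way to inspect both conditions on the node, assuming the ha-076992 profile from this run is still up, is:

  $ out/minikube-linux-amd64 -p ha-076992 ssh -- "lsmod | grep ip6table_nat || echo ip6table_nat not loaded"
  $ out/minikube-linux-amd64 -p ha-076992 ssh -- "sudo ip6tables -t nat -L -n"    # fails with the same 'Table does not exist' error until the module is loaded
  $ out/minikube-linux-amd64 -p ha-076992 ssh -- "sudo crictl imagefsinfo"        # image filesystem stats as reported by the CRI runtime

These commands are illustrative diagnostics only; they are not part of the recorded test run.)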

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.40301658s)
ha_test.go:413: expected profile "ha-076992" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-076992\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-076992\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-076992\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.173\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.232\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.66\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.157\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false
,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountI
P\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-076992 -n ha-076992
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-076992 logs -n 25: (1.351207847s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992:/home/docker/cp-test_ha-076992-m03_ha-076992.txt                       |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992 sudo cat                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992.txt                                 |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m02:/home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m04 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp testdata/cp-test.txt                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992:/home/docker/cp-test_ha-076992-m04_ha-076992.txt                       |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992 sudo cat                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992.txt                                 |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m02:/home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03:/home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m03 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-076992 node stop m02 -v=7                                                     | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 19:24:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 19:24:50.546945   29946 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:24:50.547063   29946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:50.547072   29946 out.go:358] Setting ErrFile to fd 2...
	I0919 19:24:50.547076   29946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:50.547225   29946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:24:50.547763   29946 out.go:352] Setting JSON to false
	I0919 19:24:50.548588   29946 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4035,"bootTime":1726769856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:24:50.548689   29946 start.go:139] virtualization: kvm guest
	I0919 19:24:50.550911   29946 out.go:177] * [ha-076992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:24:50.552265   29946 notify.go:220] Checking for updates...
	I0919 19:24:50.552285   29946 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:24:50.553819   29946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:24:50.555250   29946 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:24:50.556710   29946 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:50.557978   29946 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:24:50.559199   29946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:24:50.560718   29946 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:24:50.593907   29946 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 19:24:50.595154   29946 start.go:297] selected driver: kvm2
	I0919 19:24:50.595169   29946 start.go:901] validating driver "kvm2" against <nil>
	I0919 19:24:50.595180   29946 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:24:50.595817   29946 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:24:50.595876   29946 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 19:24:50.610266   29946 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 19:24:50.610336   29946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 19:24:50.610614   29946 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:24:50.610657   29946 cni.go:84] Creating CNI manager for ""
	I0919 19:24:50.610702   29946 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 19:24:50.610710   29946 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 19:24:50.610777   29946 start.go:340] cluster config:
	{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0919 19:24:50.610877   29946 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:24:50.612616   29946 out.go:177] * Starting "ha-076992" primary control-plane node in "ha-076992" cluster
	I0919 19:24:50.613886   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:24:50.613919   29946 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 19:24:50.613930   29946 cache.go:56] Caching tarball of preloaded images
	I0919 19:24:50.614002   29946 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:24:50.614013   29946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:24:50.614333   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:24:50.614355   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json: {Name:mk8d4afdb9fa7e7321b4f997efa478fa6418ce40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:24:50.614511   29946 start.go:360] acquireMachinesLock for ha-076992: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:24:50.614545   29946 start.go:364] duration metric: took 19.183µs to acquireMachinesLock for "ha-076992"
	I0919 19:24:50.614566   29946 start.go:93] Provisioning new machine with config: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:24:50.614666   29946 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 19:24:50.616202   29946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 19:24:50.616319   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:24:50.616360   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:24:50.630334   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39147
	I0919 19:24:50.630824   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:24:50.631360   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:24:50.631387   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:24:50.631735   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:24:50.631911   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:24:50.632045   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:24:50.632261   29946 start.go:159] libmachine.API.Create for "ha-076992" (driver="kvm2")
	I0919 19:24:50.632292   29946 client.go:168] LocalClient.Create starting
	I0919 19:24:50.632325   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem
	I0919 19:24:50.632369   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:24:50.632396   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:24:50.632469   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem
	I0919 19:24:50.632497   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:24:50.632517   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:24:50.632546   29946 main.go:141] libmachine: Running pre-create checks...
	I0919 19:24:50.632558   29946 main.go:141] libmachine: (ha-076992) Calling .PreCreateCheck
	I0919 19:24:50.632876   29946 main.go:141] libmachine: (ha-076992) Calling .GetConfigRaw
	I0919 19:24:50.633289   29946 main.go:141] libmachine: Creating machine...
	I0919 19:24:50.633304   29946 main.go:141] libmachine: (ha-076992) Calling .Create
	I0919 19:24:50.633442   29946 main.go:141] libmachine: (ha-076992) Creating KVM machine...
	I0919 19:24:50.634573   29946 main.go:141] libmachine: (ha-076992) DBG | found existing default KVM network
	I0919 19:24:50.635280   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:50.635109   29969 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0919 19:24:50.635311   29946 main.go:141] libmachine: (ha-076992) DBG | created network xml: 
	I0919 19:24:50.635327   29946 main.go:141] libmachine: (ha-076992) DBG | <network>
	I0919 19:24:50.635345   29946 main.go:141] libmachine: (ha-076992) DBG |   <name>mk-ha-076992</name>
	I0919 19:24:50.635359   29946 main.go:141] libmachine: (ha-076992) DBG |   <dns enable='no'/>
	I0919 19:24:50.635371   29946 main.go:141] libmachine: (ha-076992) DBG |   
	I0919 19:24:50.635380   29946 main.go:141] libmachine: (ha-076992) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0919 19:24:50.635421   29946 main.go:141] libmachine: (ha-076992) DBG |     <dhcp>
	I0919 19:24:50.635435   29946 main.go:141] libmachine: (ha-076992) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0919 19:24:50.635458   29946 main.go:141] libmachine: (ha-076992) DBG |     </dhcp>
	I0919 19:24:50.635488   29946 main.go:141] libmachine: (ha-076992) DBG |   </ip>
	I0919 19:24:50.635501   29946 main.go:141] libmachine: (ha-076992) DBG |   
	I0919 19:24:50.635515   29946 main.go:141] libmachine: (ha-076992) DBG | </network>
	I0919 19:24:50.635528   29946 main.go:141] libmachine: (ha-076992) DBG | 
	I0919 19:24:50.640246   29946 main.go:141] libmachine: (ha-076992) DBG | trying to create private KVM network mk-ha-076992 192.168.39.0/24...
	I0919 19:24:50.704681   29946 main.go:141] libmachine: (ha-076992) DBG | private KVM network mk-ha-076992 192.168.39.0/24 created
	I0919 19:24:50.704725   29946 main.go:141] libmachine: (ha-076992) Setting up store path in /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992 ...
	I0919 19:24:50.704741   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:50.704651   29969 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:50.704763   29946 main.go:141] libmachine: (ha-076992) Building disk image from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 19:24:50.704783   29946 main.go:141] libmachine: (ha-076992) Downloading /home/jenkins/minikube-integration/19664-7917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0919 19:24:50.947095   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:50.946892   29969 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa...
	I0919 19:24:51.013606   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:51.013482   29969 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/ha-076992.rawdisk...
	I0919 19:24:51.013627   29946 main.go:141] libmachine: (ha-076992) DBG | Writing magic tar header
	I0919 19:24:51.013637   29946 main.go:141] libmachine: (ha-076992) DBG | Writing SSH key tar header
	I0919 19:24:51.013650   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:51.013598   29969 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992 ...
	I0919 19:24:51.013757   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992
	I0919 19:24:51.013788   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992 (perms=drwx------)
	I0919 19:24:51.013802   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines
	I0919 19:24:51.013816   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:51.013823   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917
	I0919 19:24:51.013833   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines (perms=drwxr-xr-x)
	I0919 19:24:51.013844   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 19:24:51.013855   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube (perms=drwxr-xr-x)
	I0919 19:24:51.013870   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917 (perms=drwxrwxr-x)
	I0919 19:24:51.013881   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 19:24:51.013890   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 19:24:51.013899   29946 main.go:141] libmachine: (ha-076992) Creating domain...
	I0919 19:24:51.013908   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins
	I0919 19:24:51.013915   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home
	I0919 19:24:51.013924   29946 main.go:141] libmachine: (ha-076992) DBG | Skipping /home - not owner
	I0919 19:24:51.014892   29946 main.go:141] libmachine: (ha-076992) define libvirt domain using xml: 
	I0919 19:24:51.014904   29946 main.go:141] libmachine: (ha-076992) <domain type='kvm'>
	I0919 19:24:51.014910   29946 main.go:141] libmachine: (ha-076992)   <name>ha-076992</name>
	I0919 19:24:51.014944   29946 main.go:141] libmachine: (ha-076992)   <memory unit='MiB'>2200</memory>
	I0919 19:24:51.014958   29946 main.go:141] libmachine: (ha-076992)   <vcpu>2</vcpu>
	I0919 19:24:51.014968   29946 main.go:141] libmachine: (ha-076992)   <features>
	I0919 19:24:51.014975   29946 main.go:141] libmachine: (ha-076992)     <acpi/>
	I0919 19:24:51.014982   29946 main.go:141] libmachine: (ha-076992)     <apic/>
	I0919 19:24:51.015012   29946 main.go:141] libmachine: (ha-076992)     <pae/>
	I0919 19:24:51.015033   29946 main.go:141] libmachine: (ha-076992)     
	I0919 19:24:51.015043   29946 main.go:141] libmachine: (ha-076992)   </features>
	I0919 19:24:51.015052   29946 main.go:141] libmachine: (ha-076992)   <cpu mode='host-passthrough'>
	I0919 19:24:51.015061   29946 main.go:141] libmachine: (ha-076992)   
	I0919 19:24:51.015070   29946 main.go:141] libmachine: (ha-076992)   </cpu>
	I0919 19:24:51.015078   29946 main.go:141] libmachine: (ha-076992)   <os>
	I0919 19:24:51.015088   29946 main.go:141] libmachine: (ha-076992)     <type>hvm</type>
	I0919 19:24:51.015098   29946 main.go:141] libmachine: (ha-076992)     <boot dev='cdrom'/>
	I0919 19:24:51.015117   29946 main.go:141] libmachine: (ha-076992)     <boot dev='hd'/>
	I0919 19:24:51.015130   29946 main.go:141] libmachine: (ha-076992)     <bootmenu enable='no'/>
	I0919 19:24:51.015139   29946 main.go:141] libmachine: (ha-076992)   </os>
	I0919 19:24:51.015171   29946 main.go:141] libmachine: (ha-076992)   <devices>
	I0919 19:24:51.015199   29946 main.go:141] libmachine: (ha-076992)     <disk type='file' device='cdrom'>
	I0919 19:24:51.015212   29946 main.go:141] libmachine: (ha-076992)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/boot2docker.iso'/>
	I0919 19:24:51.015227   29946 main.go:141] libmachine: (ha-076992)       <target dev='hdc' bus='scsi'/>
	I0919 19:24:51.015247   29946 main.go:141] libmachine: (ha-076992)       <readonly/>
	I0919 19:24:51.015259   29946 main.go:141] libmachine: (ha-076992)     </disk>
	I0919 19:24:51.015272   29946 main.go:141] libmachine: (ha-076992)     <disk type='file' device='disk'>
	I0919 19:24:51.015287   29946 main.go:141] libmachine: (ha-076992)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 19:24:51.015303   29946 main.go:141] libmachine: (ha-076992)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/ha-076992.rawdisk'/>
	I0919 19:24:51.015314   29946 main.go:141] libmachine: (ha-076992)       <target dev='hda' bus='virtio'/>
	I0919 19:24:51.015325   29946 main.go:141] libmachine: (ha-076992)     </disk>
	I0919 19:24:51.015334   29946 main.go:141] libmachine: (ha-076992)     <interface type='network'>
	I0919 19:24:51.015347   29946 main.go:141] libmachine: (ha-076992)       <source network='mk-ha-076992'/>
	I0919 19:24:51.015371   29946 main.go:141] libmachine: (ha-076992)       <model type='virtio'/>
	I0919 19:24:51.015382   29946 main.go:141] libmachine: (ha-076992)     </interface>
	I0919 19:24:51.015392   29946 main.go:141] libmachine: (ha-076992)     <interface type='network'>
	I0919 19:24:51.015402   29946 main.go:141] libmachine: (ha-076992)       <source network='default'/>
	I0919 19:24:51.015412   29946 main.go:141] libmachine: (ha-076992)       <model type='virtio'/>
	I0919 19:24:51.015420   29946 main.go:141] libmachine: (ha-076992)     </interface>
	I0919 19:24:51.015432   29946 main.go:141] libmachine: (ha-076992)     <serial type='pty'>
	I0919 19:24:51.015443   29946 main.go:141] libmachine: (ha-076992)       <target port='0'/>
	I0919 19:24:51.015451   29946 main.go:141] libmachine: (ha-076992)     </serial>
	I0919 19:24:51.015462   29946 main.go:141] libmachine: (ha-076992)     <console type='pty'>
	I0919 19:24:51.015471   29946 main.go:141] libmachine: (ha-076992)       <target type='serial' port='0'/>
	I0919 19:24:51.015502   29946 main.go:141] libmachine: (ha-076992)     </console>
	I0919 19:24:51.015516   29946 main.go:141] libmachine: (ha-076992)     <rng model='virtio'>
	I0919 19:24:51.015528   29946 main.go:141] libmachine: (ha-076992)       <backend model='random'>/dev/random</backend>
	I0919 19:24:51.015538   29946 main.go:141] libmachine: (ha-076992)     </rng>
	I0919 19:24:51.015546   29946 main.go:141] libmachine: (ha-076992)     
	I0919 19:24:51.015554   29946 main.go:141] libmachine: (ha-076992)     
	I0919 19:24:51.015563   29946 main.go:141] libmachine: (ha-076992)   </devices>
	I0919 19:24:51.015571   29946 main.go:141] libmachine: (ha-076992) </domain>
	I0919 19:24:51.015594   29946 main.go:141] libmachine: (ha-076992) 
	I0919 19:24:51.019925   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:db:cf:56 in network default
	I0919 19:24:51.020474   29946 main.go:141] libmachine: (ha-076992) Ensuring networks are active...
	I0919 19:24:51.020498   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:51.021112   29946 main.go:141] libmachine: (ha-076992) Ensuring network default is active
	I0919 19:24:51.021403   29946 main.go:141] libmachine: (ha-076992) Ensuring network mk-ha-076992 is active
	I0919 19:24:51.021908   29946 main.go:141] libmachine: (ha-076992) Getting domain xml...
	I0919 19:24:51.022590   29946 main.go:141] libmachine: (ha-076992) Creating domain...
	I0919 19:24:52.199008   29946 main.go:141] libmachine: (ha-076992) Waiting to get IP...
	I0919 19:24:52.199822   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:52.200184   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:52.200222   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:52.200179   29969 retry.go:31] will retry after 305.917546ms: waiting for machine to come up
	I0919 19:24:52.507816   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:52.508347   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:52.508367   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:52.508306   29969 retry.go:31] will retry after 257.743777ms: waiting for machine to come up
	I0919 19:24:52.767675   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:52.768093   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:52.768147   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:52.768045   29969 retry.go:31] will retry after 451.176186ms: waiting for machine to come up
	I0919 19:24:53.220690   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:53.221075   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:53.221127   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:53.221017   29969 retry.go:31] will retry after 532.893204ms: waiting for machine to come up
	I0919 19:24:53.755758   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:53.756124   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:53.756151   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:53.756077   29969 retry.go:31] will retry after 735.36183ms: waiting for machine to come up
	I0919 19:24:54.492954   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:54.493288   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:54.493311   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:54.493234   29969 retry.go:31] will retry after 820.552907ms: waiting for machine to come up
	I0919 19:24:55.315112   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:55.315416   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:55.315452   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:55.315388   29969 retry.go:31] will retry after 1.159630492s: waiting for machine to come up
	I0919 19:24:56.476212   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:56.476585   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:56.476603   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:56.476554   29969 retry.go:31] will retry after 1.27132767s: waiting for machine to come up
	I0919 19:24:57.749988   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:57.750422   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:57.750445   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:57.750374   29969 retry.go:31] will retry after 1.45971409s: waiting for machine to come up
	I0919 19:24:59.211323   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:59.211646   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:59.211667   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:59.211594   29969 retry.go:31] will retry after 1.806599967s: waiting for machine to come up
	I0919 19:25:01.019773   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:01.020204   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:01.020230   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:01.020169   29969 retry.go:31] will retry after 1.98521469s: waiting for machine to come up
	I0919 19:25:03.008256   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:03.008710   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:03.008731   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:03.008667   29969 retry.go:31] will retry after 3.161929877s: waiting for machine to come up
	I0919 19:25:06.172436   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:06.172851   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:06.172870   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:06.172810   29969 retry.go:31] will retry after 3.065142974s: waiting for machine to come up
	I0919 19:25:09.242150   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:09.242595   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:09.242618   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:09.242551   29969 retry.go:31] will retry after 4.628547568s: waiting for machine to come up
	I0919 19:25:13.875203   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.875628   29946 main.go:141] libmachine: (ha-076992) Found IP for machine: 192.168.39.173
	I0919 19:25:13.875655   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has current primary IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.875661   29946 main.go:141] libmachine: (ha-076992) Reserving static IP address...
	I0919 19:25:13.876020   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find host DHCP lease matching {name: "ha-076992", mac: "52:54:00:7d:f5:95", ip: "192.168.39.173"} in network mk-ha-076992
	I0919 19:25:13.945252   29946 main.go:141] libmachine: (ha-076992) DBG | Getting to WaitForSSH function...
	I0919 19:25:13.945280   29946 main.go:141] libmachine: (ha-076992) Reserved static IP address: 192.168.39.173
	I0919 19:25:13.945289   29946 main.go:141] libmachine: (ha-076992) Waiting for SSH to be available...
	I0919 19:25:13.947766   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.948158   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:13.948194   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.948312   29946 main.go:141] libmachine: (ha-076992) DBG | Using SSH client type: external
	I0919 19:25:13.948335   29946 main.go:141] libmachine: (ha-076992) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa (-rw-------)
	I0919 19:25:13.948378   29946 main.go:141] libmachine: (ha-076992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 19:25:13.948385   29946 main.go:141] libmachine: (ha-076992) DBG | About to run SSH command:
	I0919 19:25:13.948400   29946 main.go:141] libmachine: (ha-076992) DBG | exit 0
	I0919 19:25:14.069031   29946 main.go:141] libmachine: (ha-076992) DBG | SSH cmd err, output: <nil>: 
	I0919 19:25:14.069310   29946 main.go:141] libmachine: (ha-076992) KVM machine creation complete!
	I0919 19:25:14.069628   29946 main.go:141] libmachine: (ha-076992) Calling .GetConfigRaw
	I0919 19:25:14.070250   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:14.070406   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:14.070540   29946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 19:25:14.070554   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:14.072128   29946 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 19:25:14.072140   29946 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 19:25:14.072145   29946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 19:25:14.072151   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.074112   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.074425   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.074456   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.074626   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.074770   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.074885   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.074971   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.075077   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.075278   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.075290   29946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 19:25:14.176659   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:14.176688   29946 main.go:141] libmachine: Detecting the provisioner...
	I0919 19:25:14.176697   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.179372   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.179694   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.179715   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.179850   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.180053   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.180210   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.180361   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.180525   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.180682   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.180691   29946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 19:25:14.282081   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0919 19:25:14.282192   29946 main.go:141] libmachine: found compatible host: buildroot
	I0919 19:25:14.282206   29946 main.go:141] libmachine: Provisioning with buildroot...
	I0919 19:25:14.282215   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:25:14.282509   29946 buildroot.go:166] provisioning hostname "ha-076992"
	I0919 19:25:14.282531   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:25:14.282795   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.286540   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.286900   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.286924   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.287087   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.287264   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.287404   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.287528   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.287657   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.287847   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.287862   29946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992 && echo "ha-076992" | sudo tee /etc/hostname
	I0919 19:25:14.405366   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:25:14.405398   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.408109   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.408451   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.408503   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.408709   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.408884   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.409027   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.409148   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.409275   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.409515   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.409532   29946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:25:14.518352   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:14.518409   29946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:25:14.518432   29946 buildroot.go:174] setting up certificates
	I0919 19:25:14.518441   29946 provision.go:84] configureAuth start
	I0919 19:25:14.518450   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:25:14.518683   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:14.520859   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.521176   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.521197   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.521352   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.523136   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.523477   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.523502   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.523620   29946 provision.go:143] copyHostCerts
	I0919 19:25:14.523651   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:14.523697   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:25:14.523707   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:14.523782   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:25:14.523897   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:14.523925   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:25:14.523934   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:14.523976   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:25:14.524055   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:14.524076   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:25:14.524085   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:14.524119   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:25:14.524203   29946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992 san=[127.0.0.1 192.168.39.173 ha-076992 localhost minikube]
	I0919 19:25:14.665666   29946 provision.go:177] copyRemoteCerts
	I0919 19:25:14.665718   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:25:14.665740   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.668329   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.668676   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.668708   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.668855   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.669012   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.669229   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.669429   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:14.751236   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:25:14.751315   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:25:14.776009   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:25:14.776073   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 19:25:14.800333   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:25:14.800401   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:25:14.824393   29946 provision.go:87] duration metric: took 305.938756ms to configureAuth
	I0919 19:25:14.824421   29946 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:25:14.824627   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:14.824707   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.827604   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.827968   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.827993   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.828193   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.828404   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.828556   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.828663   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.828790   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.829402   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.829444   29946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:25:15.045474   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:25:15.045502   29946 main.go:141] libmachine: Checking connection to Docker...
	I0919 19:25:15.045510   29946 main.go:141] libmachine: (ha-076992) Calling .GetURL
	I0919 19:25:15.046752   29946 main.go:141] libmachine: (ha-076992) DBG | Using libvirt version 6000000
	I0919 19:25:15.048660   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.049036   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.049059   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.049264   29946 main.go:141] libmachine: Docker is up and running!
	I0919 19:25:15.049278   29946 main.go:141] libmachine: Reticulating splines...
	I0919 19:25:15.049284   29946 client.go:171] duration metric: took 24.416985175s to LocalClient.Create
	I0919 19:25:15.049305   29946 start.go:167] duration metric: took 24.417044575s to libmachine.API.Create "ha-076992"
	I0919 19:25:15.049317   29946 start.go:293] postStartSetup for "ha-076992" (driver="kvm2")
	I0919 19:25:15.049330   29946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:25:15.049346   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.049548   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:25:15.049567   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.051882   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.052218   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.052245   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.052457   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.052636   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.052818   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.052959   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:15.135380   29946 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:25:15.139841   29946 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:25:15.139871   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:25:15.139953   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:25:15.140035   29946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:25:15.140047   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:25:15.140142   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:25:15.149803   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:25:15.173954   29946 start.go:296] duration metric: took 124.6206ms for postStartSetup
	I0919 19:25:15.174015   29946 main.go:141] libmachine: (ha-076992) Calling .GetConfigRaw
	I0919 19:25:15.174578   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:15.176983   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.177379   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.177404   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.177609   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:15.177797   29946 start.go:128] duration metric: took 24.563118372s to createHost
	I0919 19:25:15.177822   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.179973   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.180294   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.180319   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.180465   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.180655   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.180790   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.180976   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.181181   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:15.181358   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:15.181374   29946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:25:15.282086   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726773915.259292374
	
	I0919 19:25:15.282107   29946 fix.go:216] guest clock: 1726773915.259292374
	I0919 19:25:15.282114   29946 fix.go:229] Guest: 2024-09-19 19:25:15.259292374 +0000 UTC Remote: 2024-09-19 19:25:15.177809817 +0000 UTC m=+24.663846475 (delta=81.482557ms)
	I0919 19:25:15.282172   29946 fix.go:200] guest clock delta is within tolerance: 81.482557ms
	I0919 19:25:15.282183   29946 start.go:83] releasing machines lock for "ha-076992", held for 24.66762655s
	I0919 19:25:15.282207   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.282416   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:15.285015   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.285310   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.285332   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.285551   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.285982   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.286151   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.286236   29946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:25:15.286279   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.286315   29946 ssh_runner.go:195] Run: cat /version.json
	I0919 19:25:15.286338   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.288664   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.288927   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.288997   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.289024   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.289155   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.289279   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.289305   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.289315   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.289547   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.289548   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.289752   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.289745   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:15.289876   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.289970   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:15.362421   29946 ssh_runner.go:195] Run: systemctl --version
	I0919 19:25:15.387771   29946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:25:15.544684   29946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:25:15.550599   29946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:25:15.550653   29946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:25:15.566463   29946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 19:25:15.566486   29946 start.go:495] detecting cgroup driver to use...
	I0919 19:25:15.566538   29946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:25:15.582773   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:25:15.596900   29946 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:25:15.596957   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:25:15.610508   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:25:15.624376   29946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:25:15.733813   29946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:25:15.878726   29946 docker.go:233] disabling docker service ...
	I0919 19:25:15.878810   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:25:15.892801   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:25:15.905716   29946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:25:16.030572   29946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:25:16.160731   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:25:16.174416   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:25:16.192761   29946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:25:16.192830   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.203609   29946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:25:16.203677   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.214426   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.225032   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.235752   29946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:25:16.247045   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.258205   29946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.275682   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.286480   29946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:25:16.296369   29946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 19:25:16.296429   29946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 19:25:16.310714   29946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:25:16.321030   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:25:16.442591   29946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:25:16.537253   29946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:25:16.537333   29946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:25:16.542338   29946 start.go:563] Will wait 60s for crictl version
	I0919 19:25:16.542399   29946 ssh_runner.go:195] Run: which crictl
	I0919 19:25:16.546294   29946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:25:16.588011   29946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:25:16.588101   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:25:16.616308   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:25:16.647185   29946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:25:16.648600   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:16.651059   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:16.651358   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:16.651387   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:16.651601   29946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:25:16.655720   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:25:16.669431   29946 kubeadm.go:883] updating cluster {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 19:25:16.669533   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:25:16.669573   29946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:25:16.706546   29946 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0919 19:25:16.706605   29946 ssh_runner.go:195] Run: which lz4
	I0919 19:25:16.710770   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0919 19:25:16.710856   29946 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 19:25:16.715145   29946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 19:25:16.715174   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0919 19:25:18.046106   29946 crio.go:462] duration metric: took 1.335269784s to copy over tarball
	I0919 19:25:18.046183   29946 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 19:25:20.022215   29946 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.975997168s)
	I0919 19:25:20.022248   29946 crio.go:469] duration metric: took 1.976118647s to extract the tarball
	I0919 19:25:20.022255   29946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 19:25:20.059151   29946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:25:20.102732   29946 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:25:20.102759   29946 cache_images.go:84] Images are preloaded, skipping loading
	I0919 19:25:20.102769   29946 kubeadm.go:934] updating node { 192.168.39.173 8443 v1.31.1 crio true true} ...
	I0919 19:25:20.102901   29946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:25:20.102991   29946 ssh_runner.go:195] Run: crio config
	I0919 19:25:20.149091   29946 cni.go:84] Creating CNI manager for ""
	I0919 19:25:20.149117   29946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 19:25:20.149129   29946 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 19:25:20.149151   29946 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076992 NodeName:ha-076992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 19:25:20.149390   29946 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 19:25:20.149434   29946 kube-vip.go:115] generating kube-vip config ...
	I0919 19:25:20.149487   29946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:25:20.167402   29946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:25:20.167516   29946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0919 19:25:20.167589   29946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:25:20.177872   29946 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 19:25:20.177945   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 19:25:20.187340   29946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0919 19:25:20.203708   29946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:25:20.219797   29946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0919 19:25:20.236038   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0919 19:25:20.251815   29946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:25:20.255527   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:25:20.267874   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:25:20.389268   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:25:20.406525   29946 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.173
	I0919 19:25:20.406544   29946 certs.go:194] generating shared ca certs ...
	I0919 19:25:20.406562   29946 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.406708   29946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:25:20.406775   29946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:25:20.406789   29946 certs.go:256] generating profile certs ...
	I0919 19:25:20.406855   29946 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:25:20.406880   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt with IP's: []
	I0919 19:25:20.508433   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt ...
	I0919 19:25:20.508466   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt: {Name:mkfa51b5957d9c0689bd29c9d7ac67976197d1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.508644   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key ...
	I0919 19:25:20.508659   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key: {Name:mke8583745dcb3fd2e449775522b103cfe463401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.508755   29946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77
	I0919 19:25:20.508774   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.254]
	I0919 19:25:20.790439   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77 ...
	I0919 19:25:20.790476   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77: {Name:mk129f473c8ca2bf9c282104464393dd4c0e2ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.790661   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77 ...
	I0919 19:25:20.790678   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77: {Name:mk3e710a4268d5f56461b3aadb1485c362a2d2c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.790775   29946 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:25:20.790887   29946 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:25:20.790975   29946 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:25:20.790995   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt with IP's: []
	I0919 19:25:20.971771   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt ...
	I0919 19:25:20.971802   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt: {Name:mk0aab9d02f395e9da9c35e7e8f603cb6b5cdfc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.971977   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key ...
	I0919 19:25:20.971992   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key: {Name:mke99ffbb66c5a7dba2706f1581886421c464464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.972083   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:25:20.972116   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:25:20.972133   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:25:20.972152   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:25:20.972170   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:25:20.972189   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:25:20.972210   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:25:20.972227   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:25:20.972297   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:25:20.972349   29946 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:25:20.972361   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:25:20.972459   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:25:20.972537   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:25:20.972573   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:25:20.972635   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:25:20.972677   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:20.972699   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:25:20.972718   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:25:20.973287   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:25:20.998208   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:25:21.020664   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:25:21.043465   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:25:21.065487   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 19:25:21.087887   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 19:25:21.110693   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:25:21.134315   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:25:21.159427   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:25:21.209793   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:25:21.234146   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:25:21.256777   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 19:25:21.273318   29946 ssh_runner.go:195] Run: openssl version
	I0919 19:25:21.279164   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:25:21.290077   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:25:21.294953   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:25:21.295015   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:25:21.301042   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:25:21.311548   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:25:21.322467   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:25:21.326955   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:25:21.327033   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:25:21.332698   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:25:21.343007   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:25:21.353411   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:21.357905   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:21.357956   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:21.363494   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 19:25:21.373947   29946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:25:21.378011   29946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 19:25:21.378067   29946 kubeadm.go:392] StartCluster: {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:25:21.378145   29946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 19:25:21.378195   29946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 19:25:21.414470   29946 cri.go:89] found id: ""
	I0919 19:25:21.414537   29946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 19:25:21.424173   29946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 19:25:21.433474   29946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 19:25:21.442569   29946 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 19:25:21.442585   29946 kubeadm.go:157] found existing configuration files:
	
	I0919 19:25:21.442641   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 19:25:21.456054   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 19:25:21.456094   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 19:25:21.465434   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 19:25:21.474456   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 19:25:21.474516   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 19:25:21.483588   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 19:25:21.492486   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 19:25:21.492535   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 19:25:21.501852   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 19:25:21.510898   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 19:25:21.510940   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 19:25:21.520189   29946 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 19:25:21.636110   29946 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 19:25:21.636193   29946 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 19:25:21.741569   29946 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 19:25:21.741692   29946 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 19:25:21.741840   29946 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 19:25:21.751361   29946 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 19:25:21.850204   29946 out.go:235]   - Generating certificates and keys ...
	I0919 19:25:21.850323   29946 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 19:25:21.850411   29946 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 19:25:22.052364   29946 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 19:25:22.111035   29946 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 19:25:22.319537   29946 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 19:25:22.387119   29946 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 19:25:22.515422   29946 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 19:25:22.515564   29946 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-076992 localhost] and IPs [192.168.39.173 127.0.0.1 ::1]
	I0919 19:25:22.770343   29946 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 19:25:22.770549   29946 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-076992 localhost] and IPs [192.168.39.173 127.0.0.1 ::1]
	I0919 19:25:22.940962   29946 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 19:25:23.141337   29946 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 19:25:23.227103   29946 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 19:25:23.227182   29946 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 19:25:23.339999   29946 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 19:25:23.488595   29946 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 19:25:23.642974   29946 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 19:25:23.798144   29946 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 19:25:24.008881   29946 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 19:25:24.009486   29946 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 19:25:24.014369   29946 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 19:25:24.145863   29946 out.go:235]   - Booting up control plane ...
	I0919 19:25:24.146000   29946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 19:25:24.146123   29946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 19:25:24.146222   29946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 19:25:24.146351   29946 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 19:25:24.146497   29946 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 19:25:24.146584   29946 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 19:25:24.164755   29946 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 19:25:24.164947   29946 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 19:25:24.666140   29946 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.684085ms
	I0919 19:25:24.666245   29946 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 19:25:30.661904   29946 kubeadm.go:310] [api-check] The API server is healthy after 5.999328933s
	I0919 19:25:30.674821   29946 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 19:25:30.694689   29946 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 19:25:30.728456   29946 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 19:25:30.728705   29946 kubeadm.go:310] [mark-control-plane] Marking the node ha-076992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 19:25:30.742484   29946 kubeadm.go:310] [bootstrap-token] Using token: 9riz07.p2i93yajbhhfpock
	I0919 19:25:30.744002   29946 out.go:235]   - Configuring RBAC rules ...
	I0919 19:25:30.744156   29946 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 19:25:30.749173   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 19:25:30.770991   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 19:25:30.778177   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 19:25:30.786779   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 19:25:30.790121   29946 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 19:25:31.069223   29946 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 19:25:31.498557   29946 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 19:25:32.068354   29946 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 19:25:32.068406   29946 kubeadm.go:310] 
	I0919 19:25:32.068512   29946 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 19:25:32.068526   29946 kubeadm.go:310] 
	I0919 19:25:32.068652   29946 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 19:25:32.068663   29946 kubeadm.go:310] 
	I0919 19:25:32.068714   29946 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 19:25:32.068809   29946 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 19:25:32.068885   29946 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 19:25:32.068895   29946 kubeadm.go:310] 
	I0919 19:25:32.068999   29946 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 19:25:32.069019   29946 kubeadm.go:310] 
	I0919 19:25:32.069122   29946 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 19:25:32.069135   29946 kubeadm.go:310] 
	I0919 19:25:32.069210   29946 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 19:25:32.069312   29946 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 19:25:32.069415   29946 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 19:25:32.069425   29946 kubeadm.go:310] 
	I0919 19:25:32.069540   29946 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 19:25:32.069660   29946 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 19:25:32.069677   29946 kubeadm.go:310] 
	I0919 19:25:32.069794   29946 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9riz07.p2i93yajbhhfpock \
	I0919 19:25:32.069948   29946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 \
	I0919 19:25:32.069992   29946 kubeadm.go:310] 	--control-plane 
	I0919 19:25:32.070002   29946 kubeadm.go:310] 
	I0919 19:25:32.070125   29946 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 19:25:32.070153   29946 kubeadm.go:310] 
	I0919 19:25:32.070277   29946 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9riz07.p2i93yajbhhfpock \
	I0919 19:25:32.070418   29946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 
	I0919 19:25:32.071077   29946 kubeadm.go:310] W0919 19:25:21.617150     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 19:25:32.071492   29946 kubeadm.go:310] W0919 19:25:21.618100     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 19:25:32.071645   29946 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 19:25:32.071673   29946 cni.go:84] Creating CNI manager for ""
	I0919 19:25:32.071683   29946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 19:25:32.073578   29946 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 19:25:32.075092   29946 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 19:25:32.080797   29946 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0919 19:25:32.080815   29946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 19:25:32.099353   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 19:25:32.484244   29946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 19:25:32.484317   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:32.484356   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076992 minikube.k8s.io/updated_at=2024_09_19T19_25_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=ha-076992 minikube.k8s.io/primary=true
	I0919 19:25:32.699563   29946 ops.go:34] apiserver oom_adj: -16
	I0919 19:25:32.700092   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:33.200174   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:33.700760   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:34.200308   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:34.700609   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:35.200998   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:35.700578   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:35.798072   29946 kubeadm.go:1113] duration metric: took 3.313794341s to wait for elevateKubeSystemPrivileges
	I0919 19:25:35.798118   29946 kubeadm.go:394] duration metric: took 14.420052871s to StartCluster
	I0919 19:25:35.798147   29946 settings.go:142] acquiring lock: {Name:mk58f627f177d13dd5c0d47e681e886cab43cce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:35.798243   29946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:25:35.799184   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/kubeconfig: {Name:mk632e082e805bb0ee3f336087f78588814f24af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:35.799451   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 19:25:35.799465   29946 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:25:35.799491   29946 start.go:241] waiting for startup goroutines ...
	I0919 19:25:35.799511   29946 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 19:25:35.799597   29946 addons.go:69] Setting storage-provisioner=true in profile "ha-076992"
	I0919 19:25:35.799613   29946 addons.go:234] Setting addon storage-provisioner=true in "ha-076992"
	I0919 19:25:35.799618   29946 addons.go:69] Setting default-storageclass=true in profile "ha-076992"
	I0919 19:25:35.799636   29946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-076992"
	I0919 19:25:35.799646   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:25:35.799697   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:35.800027   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.800066   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.800097   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.800144   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.815590   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0919 19:25:35.815605   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I0919 19:25:35.816049   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.816088   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.816567   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.816586   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.816689   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.816710   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.816987   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.817114   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.817220   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:35.817668   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.817714   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.819378   29946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:25:35.819715   29946 kapi.go:59] client config for ha-076992: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 19:25:35.820225   29946 cert_rotation.go:140] Starting client certificate rotation controller
	I0919 19:25:35.820487   29946 addons.go:234] Setting addon default-storageclass=true in "ha-076992"
	I0919 19:25:35.820530   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:25:35.820906   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.820951   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.833309   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0919 19:25:35.833766   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.834301   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.834327   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.834689   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.834900   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:35.835942   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0919 19:25:35.836351   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.836799   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.836819   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.837143   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:35.837207   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.837734   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.837784   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.839005   29946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 19:25:35.840904   29946 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 19:25:35.840925   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 19:25:35.840944   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:35.844561   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.845133   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:35.845270   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.845469   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:35.845677   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:35.845845   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:35.845998   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:35.854128   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0919 19:25:35.854570   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.855071   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.855094   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.855375   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.855571   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:35.857281   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:35.857490   29946 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 19:25:35.857507   29946 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 19:25:35.857525   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:35.860312   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.860745   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:35.860772   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.860889   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:35.861048   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:35.861242   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:35.861376   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:35.927743   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 19:25:36.004938   29946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 19:25:36.013596   29946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 19:25:36.335279   29946 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 19:25:36.504465   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504493   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.504491   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504508   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.504762   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.504781   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.504790   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504802   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.504875   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.504890   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.504900   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.504904   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504916   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.505030   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.505034   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.505041   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.505114   29946 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 19:25:36.505136   29946 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 19:25:36.505210   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.505215   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.505222   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.505242   29946 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0919 19:25:36.505249   29946 round_trippers.go:469] Request Headers:
	I0919 19:25:36.505260   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:25:36.505265   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:25:36.515769   29946 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0919 19:25:36.516537   29946 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0919 19:25:36.516554   29946 round_trippers.go:469] Request Headers:
	I0919 19:25:36.516565   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:25:36.516572   29946 round_trippers.go:473]     Content-Type: application/json
	I0919 19:25:36.516581   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:25:36.519463   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:25:36.519632   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.519650   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.519937   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.519949   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.519960   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.522604   29946 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0919 19:25:36.523991   29946 addons.go:510] duration metric: took 724.482922ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 19:25:36.524039   29946 start.go:246] waiting for cluster config update ...
	I0919 19:25:36.524053   29946 start.go:255] writing updated cluster config ...
	I0919 19:25:36.525729   29946 out.go:201] 
	I0919 19:25:36.527177   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:36.527269   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:36.528940   29946 out.go:177] * Starting "ha-076992-m02" control-plane node in "ha-076992" cluster
	I0919 19:25:36.530205   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:25:36.530230   29946 cache.go:56] Caching tarball of preloaded images
	I0919 19:25:36.530345   29946 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:25:36.530360   29946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:25:36.530451   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:36.530647   29946 start.go:360] acquireMachinesLock for ha-076992-m02: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:25:36.530701   29946 start.go:364] duration metric: took 30.765µs to acquireMachinesLock for "ha-076992-m02"
	I0919 19:25:36.530723   29946 start.go:93] Provisioning new machine with config: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:25:36.530820   29946 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0919 19:25:36.532606   29946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 19:25:36.532678   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:36.532710   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:36.547137   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0919 19:25:36.547545   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:36.547997   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:36.548015   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:36.548367   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:36.548567   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:36.548746   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:36.548944   29946 start.go:159] libmachine.API.Create for "ha-076992" (driver="kvm2")
	I0919 19:25:36.548973   29946 client.go:168] LocalClient.Create starting
	I0919 19:25:36.549008   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem
	I0919 19:25:36.549050   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:25:36.549087   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:25:36.549192   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem
	I0919 19:25:36.549240   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:25:36.549257   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:25:36.549297   29946 main.go:141] libmachine: Running pre-create checks...
	I0919 19:25:36.549316   29946 main.go:141] libmachine: (ha-076992-m02) Calling .PreCreateCheck
	I0919 19:25:36.549515   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetConfigRaw
	I0919 19:25:36.549909   29946 main.go:141] libmachine: Creating machine...
	I0919 19:25:36.549924   29946 main.go:141] libmachine: (ha-076992-m02) Calling .Create
	I0919 19:25:36.550052   29946 main.go:141] libmachine: (ha-076992-m02) Creating KVM machine...
	I0919 19:25:36.551192   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found existing default KVM network
	I0919 19:25:36.551300   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found existing private KVM network mk-ha-076992
	I0919 19:25:36.551429   29946 main.go:141] libmachine: (ha-076992-m02) Setting up store path in /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02 ...
	I0919 19:25:36.551455   29946 main.go:141] libmachine: (ha-076992-m02) Building disk image from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 19:25:36.551523   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.551412   30305 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:25:36.551615   29946 main.go:141] libmachine: (ha-076992-m02) Downloading /home/jenkins/minikube-integration/19664-7917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0919 19:25:36.777277   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.777143   30305 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa...
	I0919 19:25:36.934632   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.934510   30305 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/ha-076992-m02.rawdisk...
	I0919 19:25:36.934655   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Writing magic tar header
	I0919 19:25:36.934666   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Writing SSH key tar header
	I0919 19:25:36.934677   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.934643   30305 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02 ...
	I0919 19:25:36.934732   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02
	I0919 19:25:36.934753   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines
	I0919 19:25:36.934762   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02 (perms=drwx------)
	I0919 19:25:36.934775   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:25:36.934789   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917
	I0919 19:25:36.934801   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 19:25:36.934811   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins
	I0919 19:25:36.934821   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines (perms=drwxr-xr-x)
	I0919 19:25:36.934826   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home
	I0919 19:25:36.934834   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Skipping /home - not owner
	I0919 19:25:36.934842   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube (perms=drwxr-xr-x)
	I0919 19:25:36.934852   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917 (perms=drwxrwxr-x)
	I0919 19:25:36.934866   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 19:25:36.934884   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 19:25:36.934911   29946 main.go:141] libmachine: (ha-076992-m02) Creating domain...
	I0919 19:25:36.935720   29946 main.go:141] libmachine: (ha-076992-m02) define libvirt domain using xml: 
	I0919 19:25:36.935740   29946 main.go:141] libmachine: (ha-076992-m02) <domain type='kvm'>
	I0919 19:25:36.935750   29946 main.go:141] libmachine: (ha-076992-m02)   <name>ha-076992-m02</name>
	I0919 19:25:36.935757   29946 main.go:141] libmachine: (ha-076992-m02)   <memory unit='MiB'>2200</memory>
	I0919 19:25:36.935765   29946 main.go:141] libmachine: (ha-076992-m02)   <vcpu>2</vcpu>
	I0919 19:25:36.935775   29946 main.go:141] libmachine: (ha-076992-m02)   <features>
	I0919 19:25:36.935783   29946 main.go:141] libmachine: (ha-076992-m02)     <acpi/>
	I0919 19:25:36.935792   29946 main.go:141] libmachine: (ha-076992-m02)     <apic/>
	I0919 19:25:36.935799   29946 main.go:141] libmachine: (ha-076992-m02)     <pae/>
	I0919 19:25:36.935808   29946 main.go:141] libmachine: (ha-076992-m02)     
	I0919 19:25:36.935823   29946 main.go:141] libmachine: (ha-076992-m02)   </features>
	I0919 19:25:36.935834   29946 main.go:141] libmachine: (ha-076992-m02)   <cpu mode='host-passthrough'>
	I0919 19:25:36.935839   29946 main.go:141] libmachine: (ha-076992-m02)   
	I0919 19:25:36.935844   29946 main.go:141] libmachine: (ha-076992-m02)   </cpu>
	I0919 19:25:36.935849   29946 main.go:141] libmachine: (ha-076992-m02)   <os>
	I0919 19:25:36.935856   29946 main.go:141] libmachine: (ha-076992-m02)     <type>hvm</type>
	I0919 19:25:36.935861   29946 main.go:141] libmachine: (ha-076992-m02)     <boot dev='cdrom'/>
	I0919 19:25:36.935865   29946 main.go:141] libmachine: (ha-076992-m02)     <boot dev='hd'/>
	I0919 19:25:36.935876   29946 main.go:141] libmachine: (ha-076992-m02)     <bootmenu enable='no'/>
	I0919 19:25:36.935883   29946 main.go:141] libmachine: (ha-076992-m02)   </os>
	I0919 19:25:36.935888   29946 main.go:141] libmachine: (ha-076992-m02)   <devices>
	I0919 19:25:36.935893   29946 main.go:141] libmachine: (ha-076992-m02)     <disk type='file' device='cdrom'>
	I0919 19:25:36.935901   29946 main.go:141] libmachine: (ha-076992-m02)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/boot2docker.iso'/>
	I0919 19:25:36.935911   29946 main.go:141] libmachine: (ha-076992-m02)       <target dev='hdc' bus='scsi'/>
	I0919 19:25:36.935916   29946 main.go:141] libmachine: (ha-076992-m02)       <readonly/>
	I0919 19:25:36.935923   29946 main.go:141] libmachine: (ha-076992-m02)     </disk>
	I0919 19:25:36.935931   29946 main.go:141] libmachine: (ha-076992-m02)     <disk type='file' device='disk'>
	I0919 19:25:36.935939   29946 main.go:141] libmachine: (ha-076992-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 19:25:36.935946   29946 main.go:141] libmachine: (ha-076992-m02)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/ha-076992-m02.rawdisk'/>
	I0919 19:25:36.935951   29946 main.go:141] libmachine: (ha-076992-m02)       <target dev='hda' bus='virtio'/>
	I0919 19:25:36.935958   29946 main.go:141] libmachine: (ha-076992-m02)     </disk>
	I0919 19:25:36.935962   29946 main.go:141] libmachine: (ha-076992-m02)     <interface type='network'>
	I0919 19:25:36.935970   29946 main.go:141] libmachine: (ha-076992-m02)       <source network='mk-ha-076992'/>
	I0919 19:25:36.935974   29946 main.go:141] libmachine: (ha-076992-m02)       <model type='virtio'/>
	I0919 19:25:36.935980   29946 main.go:141] libmachine: (ha-076992-m02)     </interface>
	I0919 19:25:36.935987   29946 main.go:141] libmachine: (ha-076992-m02)     <interface type='network'>
	I0919 19:25:36.935994   29946 main.go:141] libmachine: (ha-076992-m02)       <source network='default'/>
	I0919 19:25:36.935999   29946 main.go:141] libmachine: (ha-076992-m02)       <model type='virtio'/>
	I0919 19:25:36.936006   29946 main.go:141] libmachine: (ha-076992-m02)     </interface>
	I0919 19:25:36.936010   29946 main.go:141] libmachine: (ha-076992-m02)     <serial type='pty'>
	I0919 19:25:36.936015   29946 main.go:141] libmachine: (ha-076992-m02)       <target port='0'/>
	I0919 19:25:36.936021   29946 main.go:141] libmachine: (ha-076992-m02)     </serial>
	I0919 19:25:36.936026   29946 main.go:141] libmachine: (ha-076992-m02)     <console type='pty'>
	I0919 19:25:36.936033   29946 main.go:141] libmachine: (ha-076992-m02)       <target type='serial' port='0'/>
	I0919 19:25:36.936037   29946 main.go:141] libmachine: (ha-076992-m02)     </console>
	I0919 19:25:36.936041   29946 main.go:141] libmachine: (ha-076992-m02)     <rng model='virtio'>
	I0919 19:25:36.936048   29946 main.go:141] libmachine: (ha-076992-m02)       <backend model='random'>/dev/random</backend>
	I0919 19:25:36.936052   29946 main.go:141] libmachine: (ha-076992-m02)     </rng>
	I0919 19:25:36.936057   29946 main.go:141] libmachine: (ha-076992-m02)     
	I0919 19:25:36.936065   29946 main.go:141] libmachine: (ha-076992-m02)     
	I0919 19:25:36.936070   29946 main.go:141] libmachine: (ha-076992-m02)   </devices>
	I0919 19:25:36.936080   29946 main.go:141] libmachine: (ha-076992-m02) </domain>
	I0919 19:25:36.936086   29946 main.go:141] libmachine: (ha-076992-m02) 
	I0919 19:25:36.942900   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:0e:87:b8 in network default
	I0919 19:25:36.943479   29946 main.go:141] libmachine: (ha-076992-m02) Ensuring networks are active...
	I0919 19:25:36.943509   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:36.944120   29946 main.go:141] libmachine: (ha-076992-m02) Ensuring network default is active
	I0919 19:25:36.944391   29946 main.go:141] libmachine: (ha-076992-m02) Ensuring network mk-ha-076992 is active
	I0919 19:25:36.944707   29946 main.go:141] libmachine: (ha-076992-m02) Getting domain xml...
	I0919 19:25:36.945497   29946 main.go:141] libmachine: (ha-076992-m02) Creating domain...
	I0919 19:25:38.180680   29946 main.go:141] libmachine: (ha-076992-m02) Waiting to get IP...
	I0919 19:25:38.181469   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:38.181903   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:38.181932   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:38.181877   30305 retry.go:31] will retry after 244.203763ms: waiting for machine to come up
	I0919 19:25:38.427374   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:38.427795   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:38.427822   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:38.427757   30305 retry.go:31] will retry after 281.507755ms: waiting for machine to come up
	I0919 19:25:38.711466   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:38.711935   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:38.711962   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:38.711890   30305 retry.go:31] will retry after 465.962788ms: waiting for machine to come up
	I0919 19:25:39.179211   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:39.179652   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:39.179684   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:39.179602   30305 retry.go:31] will retry after 602.174018ms: waiting for machine to come up
	I0919 19:25:39.783380   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:39.783868   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:39.783897   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:39.783820   30305 retry.go:31] will retry after 752.65735ms: waiting for machine to come up
	I0919 19:25:40.537821   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:40.538325   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:40.538351   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:40.538278   30305 retry.go:31] will retry after 659.774912ms: waiting for machine to come up
	I0919 19:25:41.200055   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:41.200443   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:41.200472   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:41.200416   30305 retry.go:31] will retry after 933.838274ms: waiting for machine to come up
	I0919 19:25:42.135781   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:42.136230   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:42.136260   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:42.136180   30305 retry.go:31] will retry after 1.469374699s: waiting for machine to come up
	I0919 19:25:43.606700   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:43.607102   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:43.607128   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:43.607064   30305 retry.go:31] will retry after 1.652950342s: waiting for machine to come up
	I0919 19:25:45.261341   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:45.261788   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:45.261815   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:45.261744   30305 retry.go:31] will retry after 1.905868131s: waiting for machine to come up
	I0919 19:25:47.169717   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:47.170193   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:47.170220   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:47.170129   30305 retry.go:31] will retry after 2.065748875s: waiting for machine to come up
	I0919 19:25:49.238320   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:49.238667   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:49.238694   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:49.238621   30305 retry.go:31] will retry after 2.815922548s: waiting for machine to come up
	I0919 19:25:52.055810   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:52.056201   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:52.056225   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:52.056152   30305 retry.go:31] will retry after 2.765202997s: waiting for machine to come up
	I0919 19:25:54.825094   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:54.825576   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:54.825607   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:54.825532   30305 retry.go:31] will retry after 3.746769052s: waiting for machine to come up
	I0919 19:25:58.574430   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.574995   29946 main.go:141] libmachine: (ha-076992-m02) Found IP for machine: 192.168.39.232
	I0919 19:25:58.575023   29946 main.go:141] libmachine: (ha-076992-m02) Reserving static IP address...
	I0919 19:25:58.575036   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has current primary IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.575526   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find host DHCP lease matching {name: "ha-076992-m02", mac: "52:54:00:5f:39:42", ip: "192.168.39.232"} in network mk-ha-076992
	I0919 19:25:58.646823   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Getting to WaitForSSH function...
	I0919 19:25:58.646849   29946 main.go:141] libmachine: (ha-076992-m02) Reserved static IP address: 192.168.39.232
	I0919 19:25:58.646862   29946 main.go:141] libmachine: (ha-076992-m02) Waiting for SSH to be available...
	I0919 19:25:58.649682   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.650123   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:58.650200   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.650328   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Using SSH client type: external
	I0919 19:25:58.650350   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa (-rw-------)
	I0919 19:25:58.650383   29946 main.go:141] libmachine: (ha-076992-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 19:25:58.650401   29946 main.go:141] libmachine: (ha-076992-m02) DBG | About to run SSH command:
	I0919 19:25:58.650416   29946 main.go:141] libmachine: (ha-076992-m02) DBG | exit 0
	I0919 19:25:58.777771   29946 main.go:141] libmachine: (ha-076992-m02) DBG | SSH cmd err, output: <nil>: 
	I0919 19:25:58.778064   29946 main.go:141] libmachine: (ha-076992-m02) KVM machine creation complete!
	I0919 19:25:58.778379   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetConfigRaw
	I0919 19:25:58.778927   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:58.779131   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:58.779306   29946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 19:25:58.779329   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetState
	I0919 19:25:58.780634   29946 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 19:25:58.780650   29946 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 19:25:58.780657   29946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 19:25:58.780663   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:58.783144   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.783573   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:58.783595   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.783851   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:58.784010   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.784179   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.784350   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:58.784515   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:58.784730   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:58.784742   29946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 19:25:58.888256   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:58.888282   29946 main.go:141] libmachine: Detecting the provisioner...
	I0919 19:25:58.888293   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:58.891062   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.891412   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:58.891443   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.891627   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:58.891808   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.891961   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.892118   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:58.892285   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:58.892465   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:58.892476   29946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 19:25:58.997853   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0919 19:25:58.997904   29946 main.go:141] libmachine: found compatible host: buildroot
	I0919 19:25:58.997917   29946 main.go:141] libmachine: Provisioning with buildroot...
	I0919 19:25:58.997926   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:58.998154   29946 buildroot.go:166] provisioning hostname "ha-076992-m02"
	I0919 19:25:58.998180   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:58.998363   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.001218   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.001600   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.001625   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.001769   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.001924   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.002057   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.002199   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.002363   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.002512   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.002523   29946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992-m02 && echo "ha-076992-m02" | sudo tee /etc/hostname
	I0919 19:25:59.119914   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992-m02
	
	I0919 19:25:59.119943   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.122597   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.122932   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.122959   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.123102   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.123288   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.123386   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.123535   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.123663   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.123816   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.123831   29946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:25:59.234249   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:59.234283   29946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:25:59.234304   29946 buildroot.go:174] setting up certificates
	I0919 19:25:59.234313   29946 provision.go:84] configureAuth start
	I0919 19:25:59.234321   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:59.234593   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:25:59.237517   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.237906   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.237938   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.238086   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.240541   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.240911   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.240937   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.241052   29946 provision.go:143] copyHostCerts
	I0919 19:25:59.241116   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:59.241157   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:25:59.241168   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:59.241245   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:25:59.241332   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:59.241361   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:25:59.241371   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:59.241408   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:25:59.241468   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:59.241492   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:25:59.241501   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:59.241533   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:25:59.241596   29946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992-m02 san=[127.0.0.1 192.168.39.232 ha-076992-m02 localhost minikube]
	I0919 19:25:59.357826   29946 provision.go:177] copyRemoteCerts
	I0919 19:25:59.357894   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:25:59.357924   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.360530   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.360884   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.360911   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.361149   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.361317   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.361482   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.361595   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:25:59.443240   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:25:59.443310   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:25:59.469433   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:25:59.469519   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 19:25:59.495952   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:25:59.496024   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:25:59.522724   29946 provision.go:87] duration metric: took 288.400561ms to configureAuth
	I0919 19:25:59.522748   29946 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:25:59.522917   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:59.522985   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.525520   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.525889   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.525912   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.526077   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.526238   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.526387   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.526517   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.526656   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.526814   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.526826   29946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:25:59.752869   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:25:59.752893   29946 main.go:141] libmachine: Checking connection to Docker...
	I0919 19:25:59.752905   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetURL
	I0919 19:25:59.754292   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Using libvirt version 6000000
	I0919 19:25:59.756429   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.756753   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.756775   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.756952   29946 main.go:141] libmachine: Docker is up and running!
	I0919 19:25:59.756967   29946 main.go:141] libmachine: Reticulating splines...
	I0919 19:25:59.756974   29946 client.go:171] duration metric: took 23.20799249s to LocalClient.Create
	I0919 19:25:59.756996   29946 start.go:167] duration metric: took 23.208049551s to libmachine.API.Create "ha-076992"
	I0919 19:25:59.757009   29946 start.go:293] postStartSetup for "ha-076992-m02" (driver="kvm2")
	I0919 19:25:59.757026   29946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:25:59.757049   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:59.757304   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:25:59.757329   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.759641   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.760058   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.760084   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.760219   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.760398   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.760511   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.760656   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:25:59.843621   29946 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:25:59.848206   29946 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:25:59.848232   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:25:59.848296   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:25:59.848392   29946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:25:59.848404   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:25:59.848515   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:25:59.858316   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:25:59.885251   29946 start.go:296] duration metric: took 128.22453ms for postStartSetup
	I0919 19:25:59.885295   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetConfigRaw
	I0919 19:25:59.885821   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:25:59.888318   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.888680   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.888708   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.888945   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:59.889154   29946 start.go:128] duration metric: took 23.358320855s to createHost
	I0919 19:25:59.889176   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.891311   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.891643   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.891660   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.891792   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.891944   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.892068   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.892176   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.892294   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.892443   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.892452   29946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:26:00.002053   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726773959.961389731
	
	I0919 19:26:00.002074   29946 fix.go:216] guest clock: 1726773959.961389731
	I0919 19:26:00.002082   29946 fix.go:229] Guest: 2024-09-19 19:25:59.961389731 +0000 UTC Remote: 2024-09-19 19:25:59.889165721 +0000 UTC m=+69.375202371 (delta=72.22401ms)
	I0919 19:26:00.002098   29946 fix.go:200] guest clock delta is within tolerance: 72.22401ms
	I0919 19:26:00.002103   29946 start.go:83] releasing machines lock for "ha-076992-m02", held for 23.47139118s
	I0919 19:26:00.002120   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.002405   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:26:00.005381   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.005748   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:00.005768   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.008103   29946 out.go:177] * Found network options:
	I0919 19:26:00.009556   29946 out.go:177]   - NO_PROXY=192.168.39.173
	W0919 19:26:00.010768   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:26:00.010799   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.011365   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.011545   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.011641   29946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:26:00.011680   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	W0919 19:26:00.011835   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:26:00.011913   29946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:26:00.011935   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:26:00.014635   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.014741   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.015053   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:00.015078   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.015105   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:00.015122   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.015192   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:26:00.015389   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:26:00.015425   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:26:00.015551   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:26:00.015586   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:26:00.015680   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:26:00.015686   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:26:00.015847   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:26:00.243733   29946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:26:00.250260   29946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:26:00.250318   29946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:26:00.266157   29946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 19:26:00.266187   29946 start.go:495] detecting cgroup driver to use...
	I0919 19:26:00.266257   29946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:26:00.284373   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:26:00.299098   29946 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:26:00.299161   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:26:00.313776   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:26:00.328144   29946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:26:00.450118   29946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:26:00.592879   29946 docker.go:233] disabling docker service ...
	I0919 19:26:00.592942   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:26:00.607656   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:26:00.620367   29946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:26:00.756551   29946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:26:00.888081   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:26:00.901911   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:26:00.920807   29946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:26:00.920876   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.931652   29946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:26:00.931715   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.944741   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.955512   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.966422   29946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:26:00.977466   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.988029   29946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:01.011140   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:01.022261   29946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:26:01.031891   29946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 19:26:01.031944   29946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 19:26:01.044785   29946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:26:01.054444   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:26:01.182828   29946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:26:01.272829   29946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:26:01.272907   29946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:26:01.277937   29946 start.go:563] Will wait 60s for crictl version
	I0919 19:26:01.277997   29946 ssh_runner.go:195] Run: which crictl
	I0919 19:26:01.282022   29946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:26:01.321749   29946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:26:01.321825   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:26:01.350681   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:26:01.380754   29946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:26:01.382497   29946 out.go:177]   - env NO_PROXY=192.168.39.173
	I0919 19:26:01.383753   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:26:01.386332   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:01.386661   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:01.386690   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:01.386880   29946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:26:01.391190   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:26:01.403767   29946 mustload.go:65] Loading cluster: ha-076992
	I0919 19:26:01.403960   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:26:01.404199   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:01.404248   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:01.418919   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0919 19:26:01.419393   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:01.419861   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:01.419882   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:01.420168   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:01.420331   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:26:01.421875   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:26:01.422160   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:01.422195   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:01.437017   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0919 19:26:01.437468   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:01.437893   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:01.437915   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:01.438300   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:01.438497   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:26:01.438639   29946 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.232
	I0919 19:26:01.438648   29946 certs.go:194] generating shared ca certs ...
	I0919 19:26:01.438661   29946 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:26:01.438777   29946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:26:01.438815   29946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:26:01.438824   29946 certs.go:256] generating profile certs ...
	I0919 19:26:01.438904   29946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:26:01.438934   29946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548
	I0919 19:26:01.438954   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.232 192.168.39.254]
	I0919 19:26:01.570629   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548 ...
	I0919 19:26:01.570661   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548: {Name:mk20c396761e9ccfefb28b7b4e5db83bbd0de404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:26:01.570827   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548 ...
	I0919 19:26:01.570840   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548: {Name:mkbba11c725a3524e5cbb6109330222760dc216a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:26:01.570911   29946 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:26:01.571040   29946 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:26:01.571164   29946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:26:01.571178   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:26:01.571191   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:26:01.571239   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:26:01.571263   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:26:01.571276   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:26:01.571286   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:26:01.571298   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:26:01.571308   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:26:01.571356   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:26:01.571390   29946 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:26:01.571399   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:26:01.571419   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:26:01.571441   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:26:01.571462   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:26:01.571500   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:26:01.571524   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:26:01.571538   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:01.571552   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:26:01.571582   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:26:01.574554   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:01.574961   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:26:01.574989   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:01.575190   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:26:01.575379   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:26:01.575503   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:26:01.575643   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:26:01.649555   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 19:26:01.654610   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 19:26:01.666818   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 19:26:01.670813   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 19:26:01.681979   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 19:26:01.686362   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 19:26:01.696685   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 19:26:01.700738   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 19:26:01.711578   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 19:26:01.715684   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 19:26:01.727402   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 19:26:01.731821   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 19:26:01.743441   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:26:01.772076   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:26:01.796535   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:26:01.821191   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:26:01.847148   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 19:26:01.871474   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 19:26:01.894939   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:26:01.918215   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:26:01.943385   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:26:01.968566   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:26:01.992928   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:26:02.017141   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 19:26:02.033989   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 19:26:02.051070   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 19:26:02.067651   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 19:26:02.084618   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 19:26:02.100924   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 19:26:02.117332   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 19:26:02.133574   29946 ssh_runner.go:195] Run: openssl version
	I0919 19:26:02.139079   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:26:02.149396   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:26:02.153709   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:26:02.153753   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:26:02.159372   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:26:02.169469   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:26:02.179773   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:26:02.184096   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:26:02.184140   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:26:02.189599   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:26:02.199935   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:26:02.210371   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:02.214711   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:02.214755   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:02.220241   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
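	
	The certificate distribution above ends with minikube hashing each CA it copied to /usr/share/ca-certificates and symlinking it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A minimal Go sketch of that hash-and-link step follows; the helper name linkCACert and the hard-coded paths are illustrative, not minikube's own code.
	
	```go
	// Illustrative sketch: derive the OpenSSL subject-hash symlink name for a CA
	// cert and link it under /etc/ssl/certs, mirroring the
	// "openssl x509 -hash -noout" + "ln -fs" pair seen in the log above.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func linkCACert(certPath string) error {
		// Ask openssl for the subject hash (e.g. "b5213941"); the link is "<hash>.0".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // "-f" semantics: replace any stale link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	```
	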
	I0919 19:26:02.230545   29946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:26:02.234717   29946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 19:26:02.234762   29946 kubeadm.go:934] updating node {m02 192.168.39.232 8443 v1.31.1 crio true true} ...
	I0919 19:26:02.234833   29946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:26:02.234855   29946 kube-vip.go:115] generating kube-vip config ...
	I0919 19:26:02.234882   29946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:26:02.250138   29946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:26:02.250208   29946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
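	
	The generated manifest above is the kube-vip static pod that advertises the APIServerHAVIP (192.168.39.254) via ARP, with leader election (vip_leaderelection) and control-plane load balancing (lb_enable) across the control-plane nodes. A small Go sketch, under the assumption that the VIP and port from this log are reachable from the host, probes whether that virtual IP is already answering API-server connections:
	
	```go
	// Minimal sketch (not part of the test code): probe whether the kube-vip
	// virtual IP from the manifest above is accepting API-server connections.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// 192.168.39.254:8443 is the APIServerHAVIP and port used throughout this log.
		conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 2*time.Second)
		if err != nil {
			fmt.Println("VIP not reachable yet:", err)
			return
		}
		defer conn.Close()
		fmt.Println("VIP is accepting connections")
	}
	```
	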
	I0919 19:26:02.250263   29946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:26:02.260294   29946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0919 19:26:02.260356   29946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0919 19:26:02.271123   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0919 19:26:02.271155   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:26:02.271170   29946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0919 19:26:02.271131   29946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0919 19:26:02.271252   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:26:02.275907   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0919 19:26:02.275932   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0919 19:26:04.726131   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:26:04.741861   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:26:04.741942   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:26:04.747080   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0919 19:26:04.747110   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0919 19:26:05.138782   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:26:05.138864   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:26:05.143906   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0919 19:26:05.143942   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
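	
	The kubectl, kubelet, and kubeadm binaries above are fetched from dl.k8s.io with a checksum reference (…?checksum=file:…sha256) and then copied into /var/lib/minikube/binaries on the node. As an illustrative aside, a sketch of verifying such a download against its published SHA-256 digest; the cached .sha256 path and the checksumMatches helper are assumptions for the example, not minikube's own download package.
	
	```go
	// Sketch only: verify a cached kubectl/kubelet/kubeadm download against its
	// published .sha256 file before copying it onto the node.
	package main
	
	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"os"
		"strings"
	)
	
	func checksumMatches(binPath, sumPath string) (bool, error) {
		f, err := os.Open(binPath)
		if err != nil {
			return false, err
		}
		defer f.Close()
	
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return false, err
		}
	
		want, err := os.ReadFile(sumPath)
		if err != nil {
			return false, err
		}
		// The release .sha256 files contain just the hex digest.
		return hex.EncodeToString(h.Sum(nil)) == strings.TrimSpace(string(want)), nil
	}
	
	func main() {
		ok, err := checksumMatches(
			os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.1/kubectl"),
			os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.1/kubectl.sha256"), // assumed path
		)
		fmt.Println(ok, err)
	}
	```
	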
	I0919 19:26:05.391094   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 19:26:05.402470   29946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 19:26:05.419083   29946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:26:05.435530   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:26:05.452330   29946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:26:05.456142   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:26:05.468600   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:26:05.590348   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:26:05.607783   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:26:05.608143   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:05.608190   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:05.622922   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I0919 19:26:05.623374   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:05.623806   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:05.623826   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:05.624115   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:05.624311   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:26:05.624422   29946 start.go:317] joinCluster: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:26:05.624512   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 19:26:05.624535   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:26:05.627671   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:05.628201   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:26:05.628231   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:05.628426   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:26:05.628584   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:26:05.628775   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:26:05.628963   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:26:05.783004   29946 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:26:05.783062   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k2rxz4.c60ygnjp1ja274y0 --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m02 --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443"
	I0919 19:26:26.852036   29946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k2rxz4.c60ygnjp1ja274y0 --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m02 --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443": (21.068945229s)
	I0919 19:26:26.852075   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 19:26:27.433951   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076992-m02 minikube.k8s.io/updated_at=2024_09_19T19_26_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=ha-076992 minikube.k8s.io/primary=false
	I0919 19:26:27.570431   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076992-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 19:26:27.685911   29946 start.go:319] duration metric: took 22.061483301s to joinCluster
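	
	After the kubeadm join completes, the log shows minikube shelling out to kubectl to label the new node and to remove its control-plane NoSchedule taint so workloads can land on it. Purely as an illustration of the same labeling step done programmatically (this is not what minikube runs here), a client-go sketch with an assumed kubeconfig path and label key:
	
	```go
	// Illustrative client-go equivalent of labeling the freshly joined node.
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		ctx := context.Background()
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-076992-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if node.Labels == nil {
			node.Labels = map[string]string{}
		}
		node.Labels["minikube.k8s.io/primary"] = "false"
		if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("labeled", node.Name)
	}
	```
	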
	I0919 19:26:27.685989   29946 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:26:27.686288   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:26:27.687539   29946 out.go:177] * Verifying Kubernetes components...
	I0919 19:26:27.689112   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:26:27.988894   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:26:28.006672   29946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:26:28.006924   29946 kapi.go:59] client config for ha-076992: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 19:26:28.006987   29946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.173:8443
	I0919 19:26:28.007186   29946 node_ready.go:35] waiting up to 6m0s for node "ha-076992-m02" to be "Ready" ...
	I0919 19:26:28.007293   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:28.007303   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:28.007314   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:28.007319   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:28.016756   29946 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0919 19:26:28.508333   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:28.508360   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:28.508372   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:28.508378   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:28.516049   29946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0919 19:26:29.007871   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:29.007898   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:29.007909   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:29.007913   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:29.011642   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:29.507413   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:29.507439   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:29.507447   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:29.507452   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:29.511660   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:30.007557   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:30.007578   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:30.007586   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:30.007591   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:30.011038   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:30.011598   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:30.508074   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:30.508099   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:30.508109   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:30.508112   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:30.511669   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:31.007638   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:31.007657   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:31.007665   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:31.007669   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:31.011418   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:31.507577   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:31.507605   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:31.507615   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:31.507626   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:31.511375   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:32.007718   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:32.007740   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:32.007749   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:32.007756   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:32.011650   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:32.012415   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:32.507637   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:32.507664   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:32.507676   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:32.507683   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:32.511755   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:33.008213   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:33.008234   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:33.008242   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:33.008246   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:33.011792   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:33.507684   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:33.507712   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:33.507720   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:33.507725   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:33.511853   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:34.007466   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:34.007488   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:34.007496   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:34.007500   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:34.012044   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:34.013001   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:34.508399   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:34.508419   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:34.508429   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:34.508434   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:34.512448   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:35.007796   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:35.007816   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:35.007824   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:35.007827   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:35.011062   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:35.508040   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:35.508073   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:35.508085   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:35.508091   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:35.511620   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:36.008049   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:36.008071   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:36.008079   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:36.008083   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:36.011403   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:36.508302   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:36.508324   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:36.508332   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:36.508337   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:36.511571   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:36.512300   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:37.007542   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:37.007564   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:37.007575   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:37.007582   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:37.011805   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:37.508050   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:37.508072   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:37.508080   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:37.508085   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:37.511538   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:38.007485   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:38.007511   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:38.007521   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:38.007533   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:38.011022   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:38.508063   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:38.508084   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:38.508092   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:38.508096   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:38.511492   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:39.008426   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:39.008451   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:39.008461   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:39.008467   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:39.012681   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:39.013788   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:39.508128   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:39.508151   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:39.508160   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:39.508165   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:39.512449   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:40.008306   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:40.008329   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:40.008337   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:40.008340   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:40.011906   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:40.508039   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:40.508061   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:40.508069   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:40.508074   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:40.511457   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:41.007677   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:41.007700   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:41.007709   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:41.007714   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:41.011506   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:41.507543   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:41.507564   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:41.507572   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:41.507578   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:41.510792   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:41.511569   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:42.008395   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:42.008418   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:42.008426   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:42.008430   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:42.011477   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:42.507458   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:42.507479   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:42.507487   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:42.507490   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:42.510874   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:43.008232   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:43.008255   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:43.008263   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:43.008266   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:43.011709   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:43.507746   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:43.507769   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:43.507778   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:43.507783   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:43.511265   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:43.511790   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:44.008252   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:44.008274   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:44.008284   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:44.008290   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:44.011544   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:44.507848   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:44.507875   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:44.507888   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:44.507894   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:44.510925   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:45.007953   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:45.007975   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:45.007983   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:45.007987   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:45.012020   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:45.508267   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:45.508293   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:45.508302   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:45.508309   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:45.512037   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:45.512623   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:46.008137   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.008158   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.008165   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.008169   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.012104   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.012731   29946 node_ready.go:49] node "ha-076992-m02" has status "Ready":"True"
	I0919 19:26:46.012750   29946 node_ready.go:38] duration metric: took 18.005542928s for node "ha-076992-m02" to be "Ready" ...
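	
	The repeated GET requests above are node_ready.go polling the m02 node object roughly every 500ms until its Ready condition flips to True (about 18s here). A condensed client-go sketch of that readiness poll, assuming only a reachable kubeconfig; the function and file names are illustrative, not minikube's:
	
	```go
	// Sketch: keep GETting the node until its NodeReady condition is True or the
	// timeout expires, mirroring the polling loop in the log above.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitForNodeReady(cs, "ha-076992-m02", 6*time.Minute))
	}
	```
	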
	I0919 19:26:46.012759   29946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 19:26:46.012828   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:46.012838   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.012845   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.012851   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.017898   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:26:46.023994   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.024066   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bst8x
	I0919 19:26:46.024075   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.024083   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.024087   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.027015   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.027716   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.027731   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.027738   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.027742   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.030392   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.030831   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.030846   29946 pod_ready.go:82] duration metric: took 6.831386ms for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.030853   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.030893   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nbds4
	I0919 19:26:46.030900   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.030907   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.030911   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.033599   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.034104   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.034116   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.034122   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.034125   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.036185   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.036561   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.036576   29946 pod_ready.go:82] duration metric: took 5.717406ms for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.036584   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.036632   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992
	I0919 19:26:46.036642   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.036649   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.036654   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.038980   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.039515   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.039526   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.039532   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.039535   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.041804   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.042161   29946 pod_ready.go:93] pod "etcd-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.042174   29946 pod_ready.go:82] duration metric: took 5.5845ms for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.042181   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.042226   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m02
	I0919 19:26:46.042236   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.042242   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.042247   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.044464   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.045049   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.045081   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.045091   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.045095   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.047141   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.047566   29946 pod_ready.go:93] pod "etcd-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.047579   29946 pod_ready.go:82] duration metric: took 5.393087ms for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.047590   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.208948   29946 request.go:632] Waited for 161.306549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:26:46.209021   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:26:46.209027   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.209035   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.209041   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.212646   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.408764   29946 request.go:632] Waited for 195.355169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.408850   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.408861   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.408869   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.408878   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.412302   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.412793   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.412809   29946 pod_ready.go:82] duration metric: took 365.213979ms for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
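	
	The "Waited for … due to client-side throttling" lines interleaved above come from client-go's default client-side rate limiter (5 requests/second with a burst of 10), which the back-to-back pod and node GETs exceed. A minimal sketch, assuming plain client-go rather than minikube's own client helper, of raising those limits on a rest.Config:
	
	```go
	// Sketch: relax client-go's default client-side rate limiter (QPS 5, burst 10)
	// that produces the throttling waits seen in this log.
	package main
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // default is 5
		cfg.Burst = 100 // default is 10
		_ = kubernetes.NewForConfigOrDie(cfg)
	}
	```
	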
	I0919 19:26:46.412818   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.609130   29946 request.go:632] Waited for 196.247315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:26:46.609190   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:26:46.609195   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.609203   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.609205   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.612762   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.808777   29946 request.go:632] Waited for 195.389035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.808839   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.808844   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.808851   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.808854   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.812076   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.812671   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.812690   29946 pod_ready.go:82] duration metric: took 399.865629ms for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.812701   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.008865   29946 request.go:632] Waited for 196.089609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:26:47.008926   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:26:47.008931   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.008940   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.008944   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.012069   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:47.208226   29946 request.go:632] Waited for 195.285225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:47.208310   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:47.208321   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.208333   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.208340   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.211658   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:47.212273   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:47.212334   29946 pod_ready.go:82] duration metric: took 399.616733ms for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.212376   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.408402   29946 request.go:632] Waited for 195.932577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:26:47.408471   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:26:47.408476   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.408483   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.408488   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.412589   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:47.608602   29946 request.go:632] Waited for 195.361457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:47.608664   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:47.608670   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.608677   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.608683   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.611901   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:47.612434   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:47.612461   29946 pod_ready.go:82] duration metric: took 400.073222ms for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.612471   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.808579   29946 request.go:632] Waited for 196.032947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:26:47.808639   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:26:47.808647   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.808656   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.808663   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.811981   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.009006   29946 request.go:632] Waited for 196.338909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.009055   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.009072   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.009080   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.009088   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.012721   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.013205   29946 pod_ready.go:93] pod "kube-proxy-4d8dc" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:48.013223   29946 pod_ready.go:82] duration metric: took 400.743363ms for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.013233   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.208239   29946 request.go:632] Waited for 194.931072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:26:48.208327   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:26:48.208336   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.208357   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.208367   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.211846   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.408960   29946 request.go:632] Waited for 196.372524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:48.409013   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:48.409018   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.409025   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.409030   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.412044   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:48.412602   29946 pod_ready.go:93] pod "kube-proxy-tjtfj" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:48.412619   29946 pod_ready.go:82] duration metric: took 399.379304ms for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.412628   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.608768   29946 request.go:632] Waited for 196.067805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:26:48.608847   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:26:48.608853   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.608860   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.608867   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.612031   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.809050   29946 request.go:632] Waited for 196.389681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.809131   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.809137   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.809146   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.809149   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.812475   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.813104   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:48.813123   29946 pod_ready.go:82] duration metric: took 400.488766ms for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.813133   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:49.009203   29946 request.go:632] Waited for 196.009229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:26:49.009276   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:26:49.009288   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.009300   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.009312   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.013885   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:49.208739   29946 request.go:632] Waited for 194.357315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:49.208808   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:49.208813   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.208822   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.208826   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.212311   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:49.212795   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:49.212813   29946 pod_ready.go:82] duration metric: took 399.67345ms for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:49.212826   29946 pod_ready.go:39] duration metric: took 3.200055081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
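The block above polls each kube-system pod until its Ready condition reports True. Below is a minimal client-go sketch of the same kind of check; the kubeconfig path and pod name are placeholders, and this is an illustration, not minikube's actual pod_ready helper.

	// podready_sketch.go: poll a pod until its Ready condition is True.
	// Illustrative only; minikube's own readiness helper differs.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return isPodReady(pod), nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(cs, "kube-system", "kube-apiserver-ha-076992", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}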
	I0919 19:26:49.212844   29946 api_server.go:52] waiting for apiserver process to appear ...
	I0919 19:26:49.212896   29946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:26:49.228541   29946 api_server.go:72] duration metric: took 21.542513425s to wait for apiserver process to appear ...
	I0919 19:26:49.228570   29946 api_server.go:88] waiting for apiserver healthz status ...
	I0919 19:26:49.228591   29946 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0919 19:26:49.232969   29946 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0919 19:26:49.233025   29946 round_trippers.go:463] GET https://192.168.39.173:8443/version
	I0919 19:26:49.233033   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.233040   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.233048   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.234012   29946 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0919 19:26:49.234106   29946 api_server.go:141] control plane version: v1.31.1
	I0919 19:26:49.234128   29946 api_server.go:131] duration metric: took 5.550093ms to wait for apiserver health ...
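The healthz/version probe above is a plain HTTPS GET against the apiserver. A small sketch of the same probe follows; certificate verification is skipped purely for brevity, whereas minikube talks to the apiserver with the cluster's real certificates.

	// healthz_sketch.go: probe the apiserver /healthz endpoint, as the log does above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
		}
		resp, err := client.Get("https://192.168.39.173:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
	}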
	I0919 19:26:49.234140   29946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 19:26:49.408598   29946 request.go:632] Waited for 174.396795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.408664   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.408670   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.408680   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.408697   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.414220   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:26:49.419326   29946 system_pods.go:59] 17 kube-system pods found
	I0919 19:26:49.419355   29946 system_pods.go:61] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:26:49.419366   29946 system_pods.go:61] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:26:49.419370   29946 system_pods.go:61] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:26:49.419374   29946 system_pods.go:61] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:26:49.419377   29946 system_pods.go:61] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:26:49.419380   29946 system_pods.go:61] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:26:49.419384   29946 system_pods.go:61] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:26:49.419389   29946 system_pods.go:61] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:26:49.419392   29946 system_pods.go:61] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:26:49.419395   29946 system_pods.go:61] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:26:49.419398   29946 system_pods.go:61] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:26:49.419402   29946 system_pods.go:61] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:26:49.419408   29946 system_pods.go:61] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:26:49.419411   29946 system_pods.go:61] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:26:49.419415   29946 system_pods.go:61] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:26:49.419421   29946 system_pods.go:61] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:26:49.419423   29946 system_pods.go:61] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:26:49.419429   29946 system_pods.go:74] duration metric: took 185.281302ms to wait for pod list to return data ...
	I0919 19:26:49.419438   29946 default_sa.go:34] waiting for default service account to be created ...
	I0919 19:26:49.608712   29946 request.go:632] Waited for 189.201717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:26:49.608795   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:26:49.608802   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.608809   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.608814   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.612612   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:49.612816   29946 default_sa.go:45] found service account: "default"
	I0919 19:26:49.612834   29946 default_sa.go:55] duration metric: took 193.38871ms for default service account to be created ...
	I0919 19:26:49.612845   29946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 19:26:49.808242   29946 request.go:632] Waited for 195.299973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.808306   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.808313   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.808327   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.808332   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.812812   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:49.816942   29946 system_pods.go:86] 17 kube-system pods found
	I0919 19:26:49.816968   29946 system_pods.go:89] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:26:49.816974   29946 system_pods.go:89] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:26:49.816978   29946 system_pods.go:89] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:26:49.816982   29946 system_pods.go:89] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:26:49.816987   29946 system_pods.go:89] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:26:49.816990   29946 system_pods.go:89] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:26:49.816994   29946 system_pods.go:89] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:26:49.816997   29946 system_pods.go:89] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:26:49.817001   29946 system_pods.go:89] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:26:49.817006   29946 system_pods.go:89] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:26:49.817009   29946 system_pods.go:89] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:26:49.817012   29946 system_pods.go:89] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:26:49.817015   29946 system_pods.go:89] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:26:49.817018   29946 system_pods.go:89] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:26:49.817022   29946 system_pods.go:89] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:26:49.817025   29946 system_pods.go:89] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:26:49.817027   29946 system_pods.go:89] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:26:49.817033   29946 system_pods.go:126] duration metric: took 204.182134ms to wait for k8s-apps to be running ...
	I0919 19:26:49.817042   29946 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 19:26:49.817110   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:26:49.832907   29946 system_svc.go:56] duration metric: took 15.854427ms WaitForService to wait for kubelet
	I0919 19:26:49.832937   29946 kubeadm.go:582] duration metric: took 22.146916375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:26:49.832959   29946 node_conditions.go:102] verifying NodePressure condition ...
	I0919 19:26:50.008290   29946 request.go:632] Waited for 175.255303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes
	I0919 19:26:50.008370   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes
	I0919 19:26:50.008377   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:50.008395   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:50.008412   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:50.012639   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:50.013536   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:26:50.013563   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:26:50.013575   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:26:50.013578   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:26:50.013583   29946 node_conditions.go:105] duration metric: took 180.618254ms to run NodePressure ...
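The NodePressure step reads each node's capacity and condition list from the API. A hedged client-go sketch that prints the same fields (the kubeconfig path is a placeholder):

	// nodepressure_sketch.go: list nodes and report capacity plus pressure conditions,
	// mirroring the NodePressure verification above (illustrative, not minikube's code).
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
					fmt.Printf("  %s=%s\n", c.Type, c.Status)
				}
			}
		}
	}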
	I0919 19:26:50.013609   29946 start.go:241] waiting for startup goroutines ...
	I0919 19:26:50.013645   29946 start.go:255] writing updated cluster config ...
	I0919 19:26:50.016260   29946 out.go:201] 
	I0919 19:26:50.017506   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:26:50.017610   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:26:50.019348   29946 out.go:177] * Starting "ha-076992-m03" control-plane node in "ha-076992" cluster
	I0919 19:26:50.020726   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:26:50.020750   29946 cache.go:56] Caching tarball of preloaded images
	I0919 19:26:50.020859   29946 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:26:50.020870   29946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:26:50.020951   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:26:50.021276   29946 start.go:360] acquireMachinesLock for ha-076992-m03: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:26:50.021320   29946 start.go:364] duration metric: took 25.515µs to acquireMachinesLock for "ha-076992-m03"
	I0919 19:26:50.021340   29946 start.go:93] Provisioning new machine with config: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:26:50.021447   29946 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0919 19:26:50.023219   29946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 19:26:50.023316   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:50.023350   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:50.038933   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0919 19:26:50.039419   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:50.039936   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:50.039958   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:50.040292   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:50.040458   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:26:50.040592   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:26:50.040729   29946 start.go:159] libmachine.API.Create for "ha-076992" (driver="kvm2")
	I0919 19:26:50.040757   29946 client.go:168] LocalClient.Create starting
	I0919 19:26:50.040790   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem
	I0919 19:26:50.040824   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:26:50.040838   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:26:50.040886   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem
	I0919 19:26:50.040904   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:26:50.040914   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:26:50.040933   29946 main.go:141] libmachine: Running pre-create checks...
	I0919 19:26:50.040941   29946 main.go:141] libmachine: (ha-076992-m03) Calling .PreCreateCheck
	I0919 19:26:50.041191   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetConfigRaw
	I0919 19:26:50.041557   29946 main.go:141] libmachine: Creating machine...
	I0919 19:26:50.041570   29946 main.go:141] libmachine: (ha-076992-m03) Calling .Create
	I0919 19:26:50.041718   29946 main.go:141] libmachine: (ha-076992-m03) Creating KVM machine...
	I0919 19:26:50.042959   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found existing default KVM network
	I0919 19:26:50.043089   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found existing private KVM network mk-ha-076992
	I0919 19:26:50.043212   29946 main.go:141] libmachine: (ha-076992-m03) Setting up store path in /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03 ...
	I0919 19:26:50.043237   29946 main.go:141] libmachine: (ha-076992-m03) Building disk image from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 19:26:50.043301   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.043202   30696 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:26:50.043388   29946 main.go:141] libmachine: (ha-076992-m03) Downloading /home/jenkins/minikube-integration/19664-7917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0919 19:26:50.272805   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.272669   30696 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa...
	I0919 19:26:50.366932   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.366796   30696 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/ha-076992-m03.rawdisk...
	I0919 19:26:50.366967   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Writing magic tar header
	I0919 19:26:50.366980   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Writing SSH key tar header
	I0919 19:26:50.366998   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.366905   30696 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03 ...
	I0919 19:26:50.367013   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03
	I0919 19:26:50.367090   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03 (perms=drwx------)
	I0919 19:26:50.367125   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines (perms=drwxr-xr-x)
	I0919 19:26:50.367136   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines
	I0919 19:26:50.367162   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube (perms=drwxr-xr-x)
	I0919 19:26:50.367182   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:26:50.367196   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917 (perms=drwxrwxr-x)
	I0919 19:26:50.367208   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917
	I0919 19:26:50.367220   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 19:26:50.367228   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins
	I0919 19:26:50.367240   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home
	I0919 19:26:50.367249   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Skipping /home - not owner
	I0919 19:26:50.367259   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 19:26:50.367272   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 19:26:50.367282   29946 main.go:141] libmachine: (ha-076992-m03) Creating domain...
	I0919 19:26:50.368245   29946 main.go:141] libmachine: (ha-076992-m03) define libvirt domain using xml: 
	I0919 19:26:50.368263   29946 main.go:141] libmachine: (ha-076992-m03) <domain type='kvm'>
	I0919 19:26:50.368270   29946 main.go:141] libmachine: (ha-076992-m03)   <name>ha-076992-m03</name>
	I0919 19:26:50.368275   29946 main.go:141] libmachine: (ha-076992-m03)   <memory unit='MiB'>2200</memory>
	I0919 19:26:50.368280   29946 main.go:141] libmachine: (ha-076992-m03)   <vcpu>2</vcpu>
	I0919 19:26:50.368287   29946 main.go:141] libmachine: (ha-076992-m03)   <features>
	I0919 19:26:50.368314   29946 main.go:141] libmachine: (ha-076992-m03)     <acpi/>
	I0919 19:26:50.368335   29946 main.go:141] libmachine: (ha-076992-m03)     <apic/>
	I0919 19:26:50.368360   29946 main.go:141] libmachine: (ha-076992-m03)     <pae/>
	I0919 19:26:50.368384   29946 main.go:141] libmachine: (ha-076992-m03)     
	I0919 19:26:50.368405   29946 main.go:141] libmachine: (ha-076992-m03)   </features>
	I0919 19:26:50.368416   29946 main.go:141] libmachine: (ha-076992-m03)   <cpu mode='host-passthrough'>
	I0919 19:26:50.368427   29946 main.go:141] libmachine: (ha-076992-m03)   
	I0919 19:26:50.368434   29946 main.go:141] libmachine: (ha-076992-m03)   </cpu>
	I0919 19:26:50.368446   29946 main.go:141] libmachine: (ha-076992-m03)   <os>
	I0919 19:26:50.368453   29946 main.go:141] libmachine: (ha-076992-m03)     <type>hvm</type>
	I0919 19:26:50.368468   29946 main.go:141] libmachine: (ha-076992-m03)     <boot dev='cdrom'/>
	I0919 19:26:50.368486   29946 main.go:141] libmachine: (ha-076992-m03)     <boot dev='hd'/>
	I0919 19:26:50.368498   29946 main.go:141] libmachine: (ha-076992-m03)     <bootmenu enable='no'/>
	I0919 19:26:50.368507   29946 main.go:141] libmachine: (ha-076992-m03)   </os>
	I0919 19:26:50.368515   29946 main.go:141] libmachine: (ha-076992-m03)   <devices>
	I0919 19:26:50.368519   29946 main.go:141] libmachine: (ha-076992-m03)     <disk type='file' device='cdrom'>
	I0919 19:26:50.368529   29946 main.go:141] libmachine: (ha-076992-m03)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/boot2docker.iso'/>
	I0919 19:26:50.368538   29946 main.go:141] libmachine: (ha-076992-m03)       <target dev='hdc' bus='scsi'/>
	I0919 19:26:50.368548   29946 main.go:141] libmachine: (ha-076992-m03)       <readonly/>
	I0919 19:26:50.368562   29946 main.go:141] libmachine: (ha-076992-m03)     </disk>
	I0919 19:26:50.368574   29946 main.go:141] libmachine: (ha-076992-m03)     <disk type='file' device='disk'>
	I0919 19:26:50.368585   29946 main.go:141] libmachine: (ha-076992-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 19:26:50.368595   29946 main.go:141] libmachine: (ha-076992-m03)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/ha-076992-m03.rawdisk'/>
	I0919 19:26:50.368602   29946 main.go:141] libmachine: (ha-076992-m03)       <target dev='hda' bus='virtio'/>
	I0919 19:26:50.368606   29946 main.go:141] libmachine: (ha-076992-m03)     </disk>
	I0919 19:26:50.368613   29946 main.go:141] libmachine: (ha-076992-m03)     <interface type='network'>
	I0919 19:26:50.368618   29946 main.go:141] libmachine: (ha-076992-m03)       <source network='mk-ha-076992'/>
	I0919 19:26:50.368625   29946 main.go:141] libmachine: (ha-076992-m03)       <model type='virtio'/>
	I0919 19:26:50.368637   29946 main.go:141] libmachine: (ha-076992-m03)     </interface>
	I0919 19:26:50.368648   29946 main.go:141] libmachine: (ha-076992-m03)     <interface type='network'>
	I0919 19:26:50.368657   29946 main.go:141] libmachine: (ha-076992-m03)       <source network='default'/>
	I0919 19:26:50.368666   29946 main.go:141] libmachine: (ha-076992-m03)       <model type='virtio'/>
	I0919 19:26:50.368678   29946 main.go:141] libmachine: (ha-076992-m03)     </interface>
	I0919 19:26:50.368688   29946 main.go:141] libmachine: (ha-076992-m03)     <serial type='pty'>
	I0919 19:26:50.368694   29946 main.go:141] libmachine: (ha-076992-m03)       <target port='0'/>
	I0919 19:26:50.368700   29946 main.go:141] libmachine: (ha-076992-m03)     </serial>
	I0919 19:26:50.368705   29946 main.go:141] libmachine: (ha-076992-m03)     <console type='pty'>
	I0919 19:26:50.368713   29946 main.go:141] libmachine: (ha-076992-m03)       <target type='serial' port='0'/>
	I0919 19:26:50.368722   29946 main.go:141] libmachine: (ha-076992-m03)     </console>
	I0919 19:26:50.368736   29946 main.go:141] libmachine: (ha-076992-m03)     <rng model='virtio'>
	I0919 19:26:50.368755   29946 main.go:141] libmachine: (ha-076992-m03)       <backend model='random'>/dev/random</backend>
	I0919 19:26:50.368772   29946 main.go:141] libmachine: (ha-076992-m03)     </rng>
	I0919 19:26:50.368781   29946 main.go:141] libmachine: (ha-076992-m03)     
	I0919 19:26:50.368790   29946 main.go:141] libmachine: (ha-076992-m03)     
	I0919 19:26:50.368799   29946 main.go:141] libmachine: (ha-076992-m03)   </devices>
	I0919 19:26:50.368809   29946 main.go:141] libmachine: (ha-076992-m03) </domain>
	I0919 19:26:50.368819   29946 main.go:141] libmachine: (ha-076992-m03) 
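The XML echoed above is the libvirt domain definition the kvm2 driver generates for the new node. Below is a heavily trimmed sketch of defining and booting such a domain with the libvirt Go bindings; the module path and the cut-down XML are assumptions for illustration, not the driver's actual code.

	// definedomain_sketch.go: define and start a KVM domain from XML like the one logged above.
	// Assumes the libvirt Go bindings at libvirt.org/go/libvirt; the XML is deliberately minimal.
	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	const domainXML = `<domain type='kvm'>
	  <name>example-node</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <os><type>hvm</type><boot dev='hd'/></os>
	  <devices>
	    <interface type='network'><source network='default'/><model type='virtio'/></interface>
	  </devices>
	</domain>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(domainXML) // persist the definition
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boot the defined domain
			log.Fatal(err)
		}
		log.Println("domain started")
	}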
	I0919 19:26:50.375827   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:e1:f4:70 in network default
	I0919 19:26:50.376416   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:50.376447   29946 main.go:141] libmachine: (ha-076992-m03) Ensuring networks are active...
	I0919 19:26:50.377119   29946 main.go:141] libmachine: (ha-076992-m03) Ensuring network default is active
	I0919 19:26:50.377451   29946 main.go:141] libmachine: (ha-076992-m03) Ensuring network mk-ha-076992 is active
	I0919 19:26:50.377904   29946 main.go:141] libmachine: (ha-076992-m03) Getting domain xml...
	I0919 19:26:50.378666   29946 main.go:141] libmachine: (ha-076992-m03) Creating domain...
	I0919 19:26:51.611728   29946 main.go:141] libmachine: (ha-076992-m03) Waiting to get IP...
	I0919 19:26:51.612561   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:51.612946   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:51.612965   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:51.612926   30696 retry.go:31] will retry after 229.04121ms: waiting for machine to come up
	I0919 19:26:51.843282   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:51.843786   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:51.843820   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:51.843734   30696 retry.go:31] will retry after 364.805682ms: waiting for machine to come up
	I0919 19:26:52.210136   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:52.210584   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:52.210610   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:52.210546   30696 retry.go:31] will retry after 345.198613ms: waiting for machine to come up
	I0919 19:26:52.556935   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:52.557405   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:52.557428   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:52.557338   30696 retry.go:31] will retry after 457.195059ms: waiting for machine to come up
	I0919 19:26:53.015946   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:53.016403   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:53.016423   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:53.016360   30696 retry.go:31] will retry after 743.82706ms: waiting for machine to come up
	I0919 19:26:53.762468   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:53.762847   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:53.762870   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:53.762817   30696 retry.go:31] will retry after 795.902123ms: waiting for machine to come up
	I0919 19:26:54.560380   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:54.560862   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:54.560884   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:54.560818   30696 retry.go:31] will retry after 723.847816ms: waiting for machine to come up
	I0919 19:26:55.285997   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:55.286544   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:55.286569   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:55.286475   30696 retry.go:31] will retry after 1.372100892s: waiting for machine to come up
	I0919 19:26:56.660980   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:56.661391   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:56.661417   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:56.661373   30696 retry.go:31] will retry after 1.303463786s: waiting for machine to come up
	I0919 19:26:57.966063   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:57.966500   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:57.966528   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:57.966449   30696 retry.go:31] will retry after 1.418881121s: waiting for machine to come up
	I0919 19:26:59.387181   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:59.387696   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:59.387727   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:59.387636   30696 retry.go:31] will retry after 2.01324992s: waiting for machine to come up
	I0919 19:27:01.402316   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:01.402776   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:27:01.402804   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:27:01.402729   30696 retry.go:31] will retry after 3.126162565s: waiting for machine to come up
	I0919 19:27:04.533132   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:04.533523   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:27:04.533546   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:27:04.533483   30696 retry.go:31] will retry after 3.645979241s: waiting for machine to come up
	I0919 19:27:08.184963   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:08.185442   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:27:08.185465   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:27:08.185392   30696 retry.go:31] will retry after 4.695577454s: waiting for machine to come up
	I0919 19:27:12.882164   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.882571   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has current primary IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.882589   29946 main.go:141] libmachine: (ha-076992-m03) Found IP for machine: 192.168.39.66
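The "will retry after ..." lines above come from a backoff loop that keeps asking libvirt for the new domain's IP (via the network's DHCP leases) until one appears. A rough sketch of such a loop; lookupIP is a stand-in, not the real kvm2 driver call.

	// waitforip_sketch.go: retry with growing backoff until a lookup succeeds,
	// mirroring the "will retry after ..." loop in the log above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func lookupIP() (string, error) {
		// Placeholder: the real driver parses the libvirt DHCP leases for the domain's MAC.
		if rand.Intn(10) < 8 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.66", nil
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %s: waiting for machine to come up\n", backoff)
			time.Sleep(backoff)
			if backoff < 5*time.Second {
				backoff *= 2 // grow the delay, roughly like the jittered retries in the log
			}
		}
		return "", errors.New("timed out waiting for IP")
	}

	func main() {
		ip, err := waitForIP(3 * time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("found IP:", ip)
	}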
	I0919 19:27:12.882601   29946 main.go:141] libmachine: (ha-076992-m03) Reserving static IP address...
	I0919 19:27:12.882993   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find host DHCP lease matching {name: "ha-076992-m03", mac: "52:54:00:6a:be:a6", ip: "192.168.39.66"} in network mk-ha-076992
	I0919 19:27:12.954002   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Getting to WaitForSSH function...
	I0919 19:27:12.954035   29946 main.go:141] libmachine: (ha-076992-m03) Reserved static IP address: 192.168.39.66
	I0919 19:27:12.954075   29946 main.go:141] libmachine: (ha-076992-m03) Waiting for SSH to be available...
	I0919 19:27:12.956412   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.956840   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:12.956865   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.957025   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Using SSH client type: external
	I0919 19:27:12.957056   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa (-rw-------)
	I0919 19:27:12.957197   29946 main.go:141] libmachine: (ha-076992-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 19:27:12.957216   29946 main.go:141] libmachine: (ha-076992-m03) DBG | About to run SSH command:
	I0919 19:27:12.957228   29946 main.go:141] libmachine: (ha-076992-m03) DBG | exit 0
	I0919 19:27:13.081333   29946 main.go:141] libmachine: (ha-076992-m03) DBG | SSH cmd err, output: <nil>: 
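WaitForSSH above shells out to the external ssh client and retries "exit 0" until the guest answers. A sketch with the same style of flags; the host, user, and key path are illustrative placeholders.

	// waitforssh_sketch.go: poll "ssh ... exit 0" until the guest accepts connections.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshReady(host, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		return cmd.Run() == nil
	}

	func main() {
		host := "192.168.39.66"
		key := "/path/to/machines/ha-076992-m03/id_rsa" // placeholder; the real key lives under .minikube
		for i := 0; i < 30; i++ {
			if sshReady(host, key) {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}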
	I0919 19:27:13.081616   29946 main.go:141] libmachine: (ha-076992-m03) KVM machine creation complete!
	I0919 19:27:13.081958   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetConfigRaw
	I0919 19:27:13.082498   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:13.082685   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:13.082851   29946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 19:27:13.082866   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetState
	I0919 19:27:13.084230   29946 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 19:27:13.084246   29946 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 19:27:13.084253   29946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 19:27:13.084261   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.086332   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.086661   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.086683   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.086775   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.086955   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.087082   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.087204   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.087369   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.087586   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.087601   29946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 19:27:13.188711   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:27:13.188735   29946 main.go:141] libmachine: Detecting the provisioner...
	I0919 19:27:13.188748   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.191413   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.191717   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.191744   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.191916   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.192073   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.192197   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.192317   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.192502   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.192705   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.192716   29946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 19:27:13.293829   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0919 19:27:13.293892   29946 main.go:141] libmachine: found compatible host: buildroot
	I0919 19:27:13.293901   29946 main.go:141] libmachine: Provisioning with buildroot...
	I0919 19:27:13.293911   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:27:13.294179   29946 buildroot.go:166] provisioning hostname "ha-076992-m03"
	I0919 19:27:13.294206   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:27:13.294379   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.297332   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.297705   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.297731   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.297878   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.298121   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.298268   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.298407   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.298593   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.298797   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.298812   29946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992-m03 && echo "ha-076992-m03" | sudo tee /etc/hostname
	I0919 19:27:13.417925   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992-m03
	
	I0919 19:27:13.417953   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.421043   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.421515   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.421544   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.421759   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.421977   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.422158   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.422267   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.422417   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.422625   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.422650   29946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:27:13.534273   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:27:13.534305   29946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:27:13.534319   29946 buildroot.go:174] setting up certificates
	I0919 19:27:13.534328   29946 provision.go:84] configureAuth start
	I0919 19:27:13.534336   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:27:13.534593   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:13.536896   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.537258   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.537285   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.537378   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.539354   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.539732   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.539755   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.539949   29946 provision.go:143] copyHostCerts
	I0919 19:27:13.539973   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:27:13.540002   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:27:13.540010   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:27:13.540074   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:27:13.540169   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:27:13.540188   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:27:13.540192   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:27:13.540218   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:27:13.540272   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:27:13.540289   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:27:13.540295   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:27:13.540317   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:27:13.540366   29946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992-m03 san=[127.0.0.1 192.168.39.66 ha-076992-m03 localhost minikube]
	I0919 19:27:13.664258   29946 provision.go:177] copyRemoteCerts
	I0919 19:27:13.664317   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:27:13.664340   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.666694   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.666972   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.667004   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.667138   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.667349   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.667524   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.667655   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:13.747501   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:27:13.747575   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:27:13.775047   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:27:13.775117   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 19:27:13.799961   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:27:13.800042   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:27:13.824466   29946 provision.go:87] duration metric: took 290.126442ms to configureAuth
	I0919 19:27:13.824491   29946 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:27:13.824710   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:27:13.824790   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.827490   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.827892   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.827922   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.828063   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.828244   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.828410   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.828560   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.828704   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.828855   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.828868   29946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:27:14.055519   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
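The exchange above is how minikube points CRI-O at the cluster's service CIDR as an insecure registry on the new machine: it writes an environment file over SSH and restarts the runtime. A minimal standalone sketch of the same step (assuming a systemd-based guest with CRI-O installed, as in the Buildroot image used here):

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio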
	I0919 19:27:14.055549   29946 main.go:141] libmachine: Checking connection to Docker...
	I0919 19:27:14.055560   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetURL
	I0919 19:27:14.056949   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Using libvirt version 6000000
	I0919 19:27:14.059445   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.059710   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.059746   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.059910   29946 main.go:141] libmachine: Docker is up and running!
	I0919 19:27:14.059934   29946 main.go:141] libmachine: Reticulating splines...
	I0919 19:27:14.059941   29946 client.go:171] duration metric: took 24.019173404s to LocalClient.Create
	I0919 19:27:14.059965   29946 start.go:167] duration metric: took 24.019236466s to libmachine.API.Create "ha-076992"
	I0919 19:27:14.059975   29946 start.go:293] postStartSetup for "ha-076992-m03" (driver="kvm2")
	I0919 19:27:14.059989   29946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:27:14.060019   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.060324   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:27:14.060351   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:14.062476   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.062770   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.062797   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.062880   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.063087   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.063268   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.063425   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:14.148901   29946 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:27:14.153351   29946 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:27:14.153376   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:27:14.153447   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:27:14.153516   29946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:27:14.153525   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:27:14.153603   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:27:14.163847   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:27:14.190891   29946 start.go:296] duration metric: took 130.895498ms for postStartSetup
	I0919 19:27:14.190969   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetConfigRaw
	I0919 19:27:14.191591   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:14.194303   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.194676   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.194706   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.195041   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:27:14.195249   29946 start.go:128] duration metric: took 24.173788829s to createHost
	I0919 19:27:14.195296   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:14.197299   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.197596   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.197621   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.197722   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.197880   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.197999   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.198111   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.198242   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:14.198397   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:14.198407   29946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:27:14.302149   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726774034.280175121
	
	I0919 19:27:14.302173   29946 fix.go:216] guest clock: 1726774034.280175121
	I0919 19:27:14.302181   29946 fix.go:229] Guest: 2024-09-19 19:27:14.280175121 +0000 UTC Remote: 2024-09-19 19:27:14.195262057 +0000 UTC m=+143.681298720 (delta=84.913064ms)
	I0919 19:27:14.302206   29946 fix.go:200] guest clock delta is within tolerance: 84.913064ms
	I0919 19:27:14.302210   29946 start.go:83] releasing machines lock for "ha-076992-m03", held for 24.280882386s
	I0919 19:27:14.302236   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.302488   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:14.305506   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.305858   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.305888   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.308327   29946 out.go:177] * Found network options:
	I0919 19:27:14.309814   29946 out.go:177]   - NO_PROXY=192.168.39.173,192.168.39.232
	W0919 19:27:14.311323   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 19:27:14.311345   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:27:14.311387   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.311977   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.312171   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.312284   29946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:27:14.312326   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	W0919 19:27:14.312356   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 19:27:14.312379   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:27:14.312445   29946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:27:14.312467   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:14.315326   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315477   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315739   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.315765   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315795   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.315810   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315916   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.316063   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.316081   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.316266   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.316269   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.316443   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:14.316458   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.316594   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:14.552647   29946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:27:14.559427   29946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:27:14.559487   29946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:27:14.575890   29946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 19:27:14.575920   29946 start.go:495] detecting cgroup driver to use...
	I0919 19:27:14.575983   29946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:27:14.591936   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:27:14.606858   29946 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:27:14.606921   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:27:14.621450   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:27:14.635364   29946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:27:14.756131   29946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:27:14.907154   29946 docker.go:233] disabling docker service ...
	I0919 19:27:14.907243   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:27:14.923366   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:27:14.936588   29946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:27:15.078676   29946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:27:15.198104   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:27:15.212919   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:27:15.232314   29946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:27:15.232376   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.242884   29946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:27:15.242957   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.253165   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.263320   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.273801   29946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:27:15.284463   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.296688   29946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.314869   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.327156   29946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:27:15.338349   29946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 19:27:15.338412   29946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 19:27:15.353775   29946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:27:15.365059   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:27:15.499190   29946 ssh_runner.go:195] Run: sudo systemctl restart crio
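The block above is the runtime prep that happens before the kubelet is configured on the new node: crictl is pointed at the CRI-O socket, the pause image and the cgroupfs cgroup manager are set in CRI-O's drop-in config, the unprivileged-port sysctl is allowed, and bridge netfilter plus IP forwarding are enabled before CRI-O restarts. Condensed into a shell sketch using the same commands the log shows (assuming the stock /etc/crio/crio.conf.d/02-crio.conf shipped in the ISO):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio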
	I0919 19:27:15.590064   29946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:27:15.590148   29946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:27:15.595200   29946 start.go:563] Will wait 60s for crictl version
	I0919 19:27:15.595269   29946 ssh_runner.go:195] Run: which crictl
	I0919 19:27:15.599029   29946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:27:15.640263   29946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:27:15.640356   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:27:15.670621   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:27:15.702613   29946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:27:15.703947   29946 out.go:177]   - env NO_PROXY=192.168.39.173
	I0919 19:27:15.705240   29946 out.go:177]   - env NO_PROXY=192.168.39.173,192.168.39.232
	I0919 19:27:15.706651   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:15.709234   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:15.709551   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:15.709578   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:15.709744   29946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:27:15.714032   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
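The /etc/hosts update above uses minikube's usual idempotent pattern: strip any stale host.minikube.internal line, append the current gateway mapping, and only then copy the temp file over /etc/hosts, so the file is never read and truncated in the same pipeline. The same command, broken out for readability:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.39.1	host.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts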
	I0919 19:27:15.727732   29946 mustload.go:65] Loading cluster: ha-076992
	I0919 19:27:15.727996   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:27:15.728332   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:27:15.728377   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:27:15.743011   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0919 19:27:15.743384   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:27:15.743811   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:27:15.743832   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:27:15.744550   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:27:15.744751   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:27:15.746453   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:27:15.746740   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:27:15.746776   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:27:15.761958   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0919 19:27:15.762454   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:27:15.762899   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:27:15.762916   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:27:15.763265   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:27:15.763475   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:27:15.763629   29946 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.66
	I0919 19:27:15.763640   29946 certs.go:194] generating shared ca certs ...
	I0919 19:27:15.763657   29946 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:27:15.763802   29946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:27:15.763861   29946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:27:15.763874   29946 certs.go:256] generating profile certs ...
	I0919 19:27:15.763968   29946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:27:15.764001   29946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430
	I0919 19:27:15.764017   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.232 192.168.39.66 192.168.39.254]
	I0919 19:27:15.897451   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430 ...
	I0919 19:27:15.897480   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430: {Name:mk8beb13cebda88770e8cb2f4d651fd5a45e954c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:27:15.897644   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430 ...
	I0919 19:27:15.897655   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430: {Name:mkcd8cc84233dc653483e6e6401ec1c9f04025cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:27:15.897721   29946 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:27:15.897848   29946 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
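The apiserver certificate regenerated above carries SANs for all three control-plane IPs plus the load-balancer VIP (192.168.39.254) and the in-cluster service IP, which is what allows any control-plane node to serve TLS for control-plane.minikube.internal. A quick way to confirm the SAN list once the cert is installed under /var/lib/minikube/certs (illustrative check, not part of this log):

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'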
	I0919 19:27:15.897973   29946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:27:15.897988   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:27:15.898003   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:27:15.898016   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:27:15.898028   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:27:15.898040   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:27:15.898054   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:27:15.898066   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:27:15.913133   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:27:15.913210   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:27:15.913259   29946 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:27:15.913269   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:27:15.913290   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:27:15.913314   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:27:15.913334   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:27:15.913371   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:27:15.913402   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:27:15.913413   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:15.913423   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:27:15.913453   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:27:15.916526   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:15.916928   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:27:15.916951   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:15.917154   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:27:15.917364   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:27:15.917522   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:27:15.917642   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:27:15.989416   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 19:27:15.994763   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 19:27:16.006209   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 19:27:16.010673   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 19:27:16.021439   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 19:27:16.026004   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 19:27:16.036773   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 19:27:16.041211   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 19:27:16.051440   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 19:27:16.055788   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 19:27:16.066035   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 19:27:16.071009   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 19:27:16.081291   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:27:16.106933   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:27:16.131578   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:27:16.154733   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:27:16.178142   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 19:27:16.203131   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 19:27:16.231577   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:27:16.258783   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:27:16.282643   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:27:16.307319   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:27:16.330802   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:27:16.354835   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 19:27:16.371768   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 19:27:16.387527   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 19:27:16.403635   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 19:27:16.419535   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 19:27:16.437605   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 19:27:16.453718   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 19:27:16.470564   29946 ssh_runner.go:195] Run: openssl version
	I0919 19:27:16.476297   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:27:16.486813   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:27:16.491276   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:27:16.491323   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:27:16.496992   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:27:16.507732   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:27:16.518539   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:16.523068   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:16.523123   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:16.528612   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 19:27:16.539667   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:27:16.550474   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:27:16.555341   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:27:16.555413   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:27:16.561228   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:27:16.572802   29946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:27:16.577025   29946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 19:27:16.577096   29946 kubeadm.go:934] updating node {m03 192.168.39.66 8443 v1.31.1 crio true true} ...
	I0919 19:27:16.577177   29946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:27:16.577201   29946 kube-vip.go:115] generating kube-vip config ...
	I0919 19:27:16.577231   29946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:27:16.595588   29946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:27:16.595653   29946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
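The manifest above is what provides the HA virtual IP: kube-vip runs as a static pod on every control-plane node, holds leader election through the plndr-cp-lock lease, answers ARP for 192.168.39.254 on eth0, and load-balances port 8443 to the local apiservers (lb_enable/lb_port). Later in this log it is written to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet launches it without needing the API server. A hedged sketch of checking it on the node (illustrative commands, not from the log):

    sudo ls /etc/kubernetes/manifests/kube-vip.yaml
    sudo crictl ps --name kube-vip
    ip addr show eth0 | grep 192.168.39.254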
	I0919 19:27:16.595722   29946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:27:16.605668   29946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0919 19:27:16.605728   29946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0919 19:27:16.615281   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0919 19:27:16.615305   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:27:16.615306   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0919 19:27:16.615328   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:27:16.615349   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:27:16.615354   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0919 19:27:16.615388   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:27:16.615397   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:27:16.623586   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0919 19:27:16.623626   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0919 19:27:16.623772   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0919 19:27:16.623799   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0919 19:27:16.636164   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:27:16.636292   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:27:16.736519   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0919 19:27:16.736558   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0919 19:27:17.474932   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 19:27:17.484832   29946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 19:27:17.501777   29946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:27:17.518686   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:27:17.535414   29946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:27:17.539429   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:27:17.552345   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:27:17.687800   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:27:17.706912   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:27:17.707271   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:27:17.707332   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:27:17.723234   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46531
	I0919 19:27:17.723773   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:27:17.724317   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:27:17.724344   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:27:17.724711   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:27:17.724916   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:27:17.725046   29946 start.go:317] joinCluster: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:27:17.725198   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 19:27:17.725213   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:27:17.728260   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:17.728743   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:27:17.728764   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:17.728933   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:27:17.729087   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:27:17.729233   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:27:17.729362   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:27:17.893938   29946 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:27:17.893987   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nfzvhu.osmpbokubpd9m5ji --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m03 --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443"
	I0919 19:27:40.045829   29946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nfzvhu.osmpbokubpd9m5ji --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m03 --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443": (22.151818373s)
	I0919 19:27:40.045864   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 19:27:40.606802   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076992-m03 minikube.k8s.io/updated_at=2024_09_19T19_27_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=ha-076992 minikube.k8s.io/primary=false
	I0919 19:27:40.720562   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076992-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 19:27:40.852305   29946 start.go:319] duration metric: took 23.127257351s to joinCluster
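At this point the third control-plane node has joined through the VIP endpoint (control-plane.minikube.internal:8443) using the bootstrap token minted on the primary, been labeled with the minikube metadata shown above, and had the control-plane NoSchedule taint removed so it can also schedule workloads. A hedged way to confirm the resulting topology from the primary node (illustrative, not part of the test):

    kubectl get nodes -o wide
    kubectl -n kube-system get pods -l component=etcd -o wide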
	I0919 19:27:40.852371   29946 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:27:40.852725   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:27:40.853772   29946 out.go:177] * Verifying Kubernetes components...
	I0919 19:27:40.855055   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:27:41.140593   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:27:41.167178   29946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:27:41.167526   29946 kapi.go:59] client config for ha-076992: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 19:27:41.167609   29946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.173:8443
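
The warning above shows the client swapping the stale load-balancer endpoint (https://192.168.39.254:8443) for the reachable apiserver at https://192.168.39.173:8443 before any requests go out. A minimal client-go sketch of that pattern, for illustration only (not minikube's kapi.go helper; the kubeconfig path is a placeholder, not taken from this run):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load a cached kubeconfig (placeholder path).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        // The cached config may still point at a stale VIP; override the host
        // with a control-plane endpoint that is known to be reachable.
        cfg.Host = "https://192.168.39.173:8443"

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("cluster reachable, %d nodes\n", len(nodes.Items))
    }
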
	I0919 19:27:41.167883   29946 node_ready.go:35] waiting up to 6m0s for node "ha-076992-m03" to be "Ready" ...
	I0919 19:27:41.167964   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:41.167975   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:41.167986   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:41.167992   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:41.171312   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:41.668093   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:41.668122   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:41.668136   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:41.668145   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:41.671847   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:42.169049   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:42.169078   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:42.169089   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:42.169097   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:42.173253   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:42.668124   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:42.668154   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:42.668165   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:42.668172   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:42.671705   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:43.169071   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:43.169099   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:43.169111   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:43.169119   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:43.172988   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:43.173723   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:43.668069   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:43.668090   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:43.668098   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:43.668102   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:43.671379   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:44.168189   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:44.168213   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:44.168224   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:44.168232   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:44.172163   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:44.668238   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:44.668263   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:44.668292   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:44.668300   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:44.672297   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:45.168809   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:45.168914   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:45.168943   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:45.168952   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:45.172818   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:45.668795   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:45.668819   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:45.668829   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:45.668833   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:45.672833   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:45.673726   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:46.168145   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:46.168176   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:46.168188   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:46.168195   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:46.171541   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:46.669018   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:46.669043   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:46.669053   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:46.669058   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:46.672077   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:47.168070   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:47.168095   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:47.168106   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:47.168112   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:47.171091   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:47.668131   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:47.668156   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:47.668167   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:47.668173   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:47.671585   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:48.168035   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:48.168054   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:48.168066   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:48.168071   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:48.172365   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:48.172854   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:48.668232   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:48.668261   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:48.668269   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:48.668273   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:48.671672   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:49.168763   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:49.168784   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:49.168792   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:49.168796   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:49.172225   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:49.668291   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:49.668312   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:49.668319   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:49.668323   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:49.671622   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:50.168990   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:50.169014   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:50.169023   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:50.169028   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:50.172111   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:50.668480   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:50.668500   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:50.668508   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:50.668514   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:50.672693   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:50.673442   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:51.168845   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:51.168870   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:51.168883   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:51.168896   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:51.172225   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:51.668471   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:51.668494   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:51.668505   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:51.668510   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:51.672549   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:52.168467   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:52.168490   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:52.168499   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:52.168502   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:52.172284   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:52.668300   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:52.668325   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:52.668337   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:52.668345   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:52.671626   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:53.168043   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:53.168066   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:53.168076   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:53.168082   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:53.171507   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:53.172186   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:53.668508   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:53.668530   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:53.668539   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:53.668544   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:53.674065   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:54.169042   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:54.169081   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:54.169093   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:54.169101   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:54.172484   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:54.668693   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:54.668716   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:54.668724   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:54.668728   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:54.671712   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:55.168811   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:55.168838   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:55.168850   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:55.168856   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:55.171986   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:55.172564   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:55.669027   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:55.669049   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:55.669060   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:55.669116   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:55.674537   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:56.168644   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:56.168667   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:56.168674   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:56.168677   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:56.172061   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:56.669121   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:56.669152   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:56.669164   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:56.669170   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:56.672708   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:57.168818   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:57.168844   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:57.168856   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:57.168865   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:57.172258   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:57.172846   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:57.668135   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:57.668158   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:57.668169   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:57.668174   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:57.671424   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:58.168923   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:58.168945   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:58.168953   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:58.168956   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:58.172623   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:58.668685   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:58.668705   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:58.668713   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:58.668717   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:58.671912   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.168858   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:59.168880   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.168889   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.168892   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.171841   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.172469   29946 node_ready.go:49] node "ha-076992-m03" has status "Ready":"True"
	I0919 19:27:59.172488   29946 node_ready.go:38] duration metric: took 18.004586894s for node "ha-076992-m03" to be "Ready" ...
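
The ~18 seconds of repeated GETs above is the node_ready poll loop: the Node object is re-read on a roughly 500ms cadence until its Ready condition turns True. A compact client-go sketch of the same check, for illustration (not minikube's node_ready.go; the clientset is assumed to be built as in the earlier sketch):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-reads the Node every 500ms (the cadence visible in the
    // log above) until its Ready condition is True or the timeout expires.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

A call such as waitNodeReady(cs, "ha-076992-m03", 6*time.Minute) corresponds to the 6m0s budget announced at the start of the loop.
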
	I0919 19:27:59.172499   29946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 19:27:59.172582   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:27:59.172595   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.172604   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.172609   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.178464   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:59.185406   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.185497   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bst8x
	I0919 19:27:59.185507   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.185518   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.185526   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.188442   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.189103   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.189120   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.189130   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.189136   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.191329   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.191851   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.191866   29946 pod_ready.go:82] duration metric: took 6.439364ms for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.191873   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.191928   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nbds4
	I0919 19:27:59.191937   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.191944   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.191948   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.194394   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.195009   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.195025   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.195031   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.195035   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.197517   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.198256   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.198270   29946 pod_ready.go:82] duration metric: took 6.390833ms for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.198278   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.198317   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992
	I0919 19:27:59.198324   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.198331   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.198336   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.200499   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.201171   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.201184   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.201190   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.201201   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.203402   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.203953   29946 pod_ready.go:93] pod "etcd-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.203973   29946 pod_ready.go:82] duration metric: took 5.68948ms for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.203984   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.204042   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m02
	I0919 19:27:59.204053   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.204062   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.204073   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.206409   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.207206   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:27:59.207225   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.207234   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.207242   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.209682   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.210215   29946 pod_ready.go:93] pod "etcd-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.210231   29946 pod_ready.go:82] duration metric: took 6.235966ms for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.210241   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.369687   29946 request.go:632] Waited for 159.345593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m03
	I0919 19:27:59.369758   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m03
	I0919 19:27:59.369768   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.369776   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.369782   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.373326   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
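
The "Waited ... due to client-side throttling" lines are emitted by client-go's own rate limiter, not by API-server priority and fairness: with QPS and Burst left at zero in the rest.Config dumped above, client-go falls back to roughly 5 requests/s with a burst of 10. If that latency mattered, the limits could be raised when building the client, as in this illustrative sketch (values are examples, not what the test run used):

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset with a higher client-side rate limit.
    // Leaving QPS/Burst at zero (as in the config above) means the default
    // 5 req/s with burst 10, which is what produces the throttling waits.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50   // illustrative
        cfg.Burst = 100 // illustrative
        return kubernetes.NewForConfig(cfg)
    }
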
	I0919 19:27:59.569343   29946 request.go:632] Waited for 195.374141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:59.569427   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:59.569435   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.569444   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.569454   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.572773   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.573760   29946 pod_ready.go:93] pod "etcd-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.573784   29946 pod_ready.go:82] duration metric: took 363.534844ms for pod "etcd-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.573804   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.769848   29946 request.go:632] Waited for 195.964398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:27:59.769916   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:27:59.769924   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.769941   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.769951   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.773613   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.969692   29946 request.go:632] Waited for 195.271169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.969763   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.969771   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.969782   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.969790   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.975454   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:59.976399   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.976419   29946 pod_ready.go:82] duration metric: took 402.608428ms for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.976442   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.169862   29946 request.go:632] Waited for 193.313777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:28:00.169932   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:28:00.169948   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.169963   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.169971   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.173456   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.369679   29946 request.go:632] Waited for 195.364808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:00.369746   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:00.369757   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.369769   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.369777   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.373078   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.373725   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:00.373745   29946 pod_ready.go:82] duration metric: took 397.293364ms for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.373754   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.569238   29946 request.go:632] Waited for 195.416262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m03
	I0919 19:28:00.569304   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m03
	I0919 19:28:00.569310   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.569317   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.569325   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.572712   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.769839   29946 request.go:632] Waited for 196.213847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:00.769902   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:00.769909   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.769916   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.769925   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.773054   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.773595   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:00.773611   29946 pod_ready.go:82] duration metric: took 399.848276ms for pod "kube-apiserver-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.773623   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.969813   29946 request.go:632] Waited for 196.102797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:28:00.969866   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:28:00.969871   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.969878   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.969883   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.978905   29946 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0919 19:28:01.169966   29946 request.go:632] Waited for 190.375143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:01.170066   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:01.170080   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.170090   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.170095   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.173733   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.174395   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:01.174419   29946 pod_ready.go:82] duration metric: took 400.786244ms for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.174431   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.369465   29946 request.go:632] Waited for 194.942354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:28:01.369536   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:28:01.369546   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.369559   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.369570   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.373178   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.569830   29946 request.go:632] Waited for 195.884004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:01.569887   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:01.569894   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.569906   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.569911   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.573021   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.573575   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:01.573597   29946 pod_ready.go:82] duration metric: took 399.158503ms for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.573610   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.769720   29946 request.go:632] Waited for 196.039819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m03
	I0919 19:28:01.769796   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m03
	I0919 19:28:01.769804   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.769815   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.769863   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.773496   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.969679   29946 request.go:632] Waited for 195.366002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:01.969751   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:01.969759   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.969770   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.969778   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.973411   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.973966   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:01.973986   29946 pod_ready.go:82] duration metric: took 400.368344ms for pod "kube-controller-manager-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.973999   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.169159   29946 request.go:632] Waited for 195.067817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:28:02.169233   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:28:02.169240   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.169249   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.169255   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.172645   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:02.369743   29946 request.go:632] Waited for 196.39611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:02.369834   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:02.369848   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.369859   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.369869   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.372902   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:02.373658   29946 pod_ready.go:93] pod "kube-proxy-4d8dc" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:02.373679   29946 pod_ready.go:82] duration metric: took 399.671379ms for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.373695   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qxzr" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.569759   29946 request.go:632] Waited for 195.99907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qxzr
	I0919 19:28:02.569828   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qxzr
	I0919 19:28:02.569835   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.569845   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.569850   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.573245   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:02.769286   29946 request.go:632] Waited for 195.311639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:02.769401   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:02.769411   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.769421   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.769429   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.774902   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:28:02.775546   29946 pod_ready.go:93] pod "kube-proxy-4qxzr" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:02.775569   29946 pod_ready.go:82] duration metric: took 401.866343ms for pod "kube-proxy-4qxzr" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.775582   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.969688   29946 request.go:632] Waited for 194.028715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:28:02.969782   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:28:02.969793   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.969804   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.969814   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.973511   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.169667   29946 request.go:632] Waited for 195.362144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.169732   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.169740   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.169750   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.169759   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.173206   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.173751   29946 pod_ready.go:93] pod "kube-proxy-tjtfj" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:03.173769   29946 pod_ready.go:82] duration metric: took 398.180461ms for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.173777   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.369899   29946 request.go:632] Waited for 196.051119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:28:03.370000   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:28:03.370008   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.370019   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.370028   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.373045   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:28:03.569018   29946 request.go:632] Waited for 195.269584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:03.569098   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:03.569104   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.569111   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.569117   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.572980   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.573818   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:03.573842   29946 pod_ready.go:82] duration metric: took 400.056994ms for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.573856   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.768884   29946 request.go:632] Waited for 194.957925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:28:03.768975   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:28:03.768982   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.768989   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.768994   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.772280   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.969113   29946 request.go:632] Waited for 196.276201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.969173   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.969181   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.969192   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.969201   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.972689   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.973513   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:03.973536   29946 pod_ready.go:82] duration metric: took 399.670878ms for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.973550   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:04.169664   29946 request.go:632] Waited for 196.044338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m03
	I0919 19:28:04.169768   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m03
	I0919 19:28:04.169779   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.169790   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.169795   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.173604   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:04.369491   29946 request.go:632] Waited for 195.428121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:04.369586   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:04.369594   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.369605   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.369611   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.373358   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:04.373807   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:04.373827   29946 pod_ready.go:82] duration metric: took 400.269116ms for pod "kube-scheduler-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:04.373841   29946 pod_ready.go:39] duration metric: took 5.201326396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
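
The pod_ready phase above follows one pattern per system-critical pod: GET the Pod, GET the Node it is scheduled on, and accept the pod once its Ready condition reports True. A minimal sketch of the per-pod check, for illustration (not minikube's pod_ready.go; the clientset is assumed as before):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether a kube-system pod has condition Ready=True,
    // the same criterion the log above applies to each system-critical pod.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
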
	I0919 19:28:04.373868   29946 api_server.go:52] waiting for apiserver process to appear ...
	I0919 19:28:04.373935   29946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:28:04.390528   29946 api_server.go:72] duration metric: took 23.538119441s to wait for apiserver process to appear ...
	I0919 19:28:04.390551   29946 api_server.go:88] waiting for apiserver healthz status ...
	I0919 19:28:04.390571   29946 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0919 19:28:04.396791   29946 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0919 19:28:04.396862   29946 round_trippers.go:463] GET https://192.168.39.173:8443/version
	I0919 19:28:04.396873   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.396882   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.396889   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.397946   29946 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 19:28:04.398142   29946 api_server.go:141] control plane version: v1.31.1
	I0919 19:28:04.398162   29946 api_server.go:131] duration metric: took 7.603365ms to wait for apiserver health ...
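
The apiserver health step above is a plain GET of /healthz, expecting the literal body "ok", followed by GET /version to read the control-plane version. Both calls can be made through the clientset's discovery REST client, sketched here for illustration:

    package sketch

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // checkAPIServer hits /healthz and then reports the server version,
    // mirroring the two requests logged above.
    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("unexpected healthz response: %q", body)
        }
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Println("control plane version:", v.GitVersion)
        return nil
    }
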
	I0919 19:28:04.398171   29946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 19:28:04.569591   29946 request.go:632] Waited for 171.340636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.569649   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.569654   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.569661   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.569665   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.575663   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:28:04.582592   29946 system_pods.go:59] 24 kube-system pods found
	I0919 19:28:04.582629   29946 system_pods.go:61] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:28:04.582636   29946 system_pods.go:61] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:28:04.582641   29946 system_pods.go:61] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:28:04.582646   29946 system_pods.go:61] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:28:04.582651   29946 system_pods.go:61] "etcd-ha-076992-m03" [2cb8094f-2857-49e8-a740-58c09de52bb5] Running
	I0919 19:28:04.582656   29946 system_pods.go:61] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:28:04.582660   29946 system_pods.go:61] "kindnet-89gmh" [696397d5-76c4-4565-9baa-042392bc74c8] Running
	I0919 19:28:04.582665   29946 system_pods.go:61] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:28:04.582670   29946 system_pods.go:61] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:28:04.582674   29946 system_pods.go:61] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:28:04.582679   29946 system_pods.go:61] "kube-apiserver-ha-076992-m03" [7ada8b62-958d-4bbf-9b60-4f2f8738e864] Running
	I0919 19:28:04.582685   29946 system_pods.go:61] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:28:04.582696   29946 system_pods.go:61] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:28:04.582705   29946 system_pods.go:61] "kube-controller-manager-ha-076992-m03" [b12ed136-a047-45cc-966f-fdbb624ee027] Running
	I0919 19:28:04.582710   29946 system_pods.go:61] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:28:04.582715   29946 system_pods.go:61] "kube-proxy-4qxzr" [91b8da75-fb68-4cfe-b463-5f4ce57a9fbc] Running
	I0919 19:28:04.582719   29946 system_pods.go:61] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:28:04.582722   29946 system_pods.go:61] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:28:04.582725   29946 system_pods.go:61] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:28:04.582729   29946 system_pods.go:61] "kube-scheduler-ha-076992-m03" [7b69ed21-49ee-47d0-add2-83b93f61b3cf] Running
	I0919 19:28:04.582732   29946 system_pods.go:61] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:28:04.582735   29946 system_pods.go:61] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:28:04.582738   29946 system_pods.go:61] "kube-vip-ha-076992-m03" [8e4ad9ad-38d3-4189-8ea9-16a7e8f87f08] Running
	I0919 19:28:04.582741   29946 system_pods.go:61] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:28:04.582746   29946 system_pods.go:74] duration metric: took 184.569532ms to wait for pod list to return data ...
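
The 24-pod inventory above comes from a single list of the kube-system namespace, with each pod's phase printed; the system_pods check further below repeats the same list to confirm the workloads are Running. A sketch of that list-and-count step, for illustration:

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods lists kube-system pods and reports how many are Running,
    // the same data the system_pods step prints per pod.
    func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        running := 0
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning {
                running++
            }
        }
        fmt.Printf("%d kube-system pods found, %d Running\n", len(pods.Items), running)
        return nil
    }
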
	I0919 19:28:04.582762   29946 default_sa.go:34] waiting for default service account to be created ...
	I0919 19:28:04.769178   29946 request.go:632] Waited for 186.318811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:28:04.769251   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:28:04.769259   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.769269   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.769302   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.773568   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:28:04.773707   29946 default_sa.go:45] found service account: "default"
	I0919 19:28:04.773726   29946 default_sa.go:55] duration metric: took 190.956992ms for default service account to be created ...
	I0919 19:28:04.773736   29946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 19:28:04.968965   29946 request.go:632] Waited for 195.155154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.969039   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.969056   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.969099   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.969108   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.974937   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:28:04.983584   29946 system_pods.go:86] 24 kube-system pods found
	I0919 19:28:04.983617   29946 system_pods.go:89] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:28:04.983625   29946 system_pods.go:89] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:28:04.983629   29946 system_pods.go:89] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:28:04.983633   29946 system_pods.go:89] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:28:04.983637   29946 system_pods.go:89] "etcd-ha-076992-m03" [2cb8094f-2857-49e8-a740-58c09de52bb5] Running
	I0919 19:28:04.983641   29946 system_pods.go:89] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:28:04.983645   29946 system_pods.go:89] "kindnet-89gmh" [696397d5-76c4-4565-9baa-042392bc74c8] Running
	I0919 19:28:04.983648   29946 system_pods.go:89] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:28:04.983652   29946 system_pods.go:89] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:28:04.983656   29946 system_pods.go:89] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:28:04.983659   29946 system_pods.go:89] "kube-apiserver-ha-076992-m03" [7ada8b62-958d-4bbf-9b60-4f2f8738e864] Running
	I0919 19:28:04.983663   29946 system_pods.go:89] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:28:04.983667   29946 system_pods.go:89] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:28:04.983670   29946 system_pods.go:89] "kube-controller-manager-ha-076992-m03" [b12ed136-a047-45cc-966f-fdbb624ee027] Running
	I0919 19:28:04.983674   29946 system_pods.go:89] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:28:04.983677   29946 system_pods.go:89] "kube-proxy-4qxzr" [91b8da75-fb68-4cfe-b463-5f4ce57a9fbc] Running
	I0919 19:28:04.983680   29946 system_pods.go:89] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:28:04.983683   29946 system_pods.go:89] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:28:04.983687   29946 system_pods.go:89] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:28:04.983691   29946 system_pods.go:89] "kube-scheduler-ha-076992-m03" [7b69ed21-49ee-47d0-add2-83b93f61b3cf] Running
	I0919 19:28:04.983694   29946 system_pods.go:89] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:28:04.983697   29946 system_pods.go:89] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:28:04.983708   29946 system_pods.go:89] "kube-vip-ha-076992-m03" [8e4ad9ad-38d3-4189-8ea9-16a7e8f87f08] Running
	I0919 19:28:04.983714   29946 system_pods.go:89] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:28:04.983719   29946 system_pods.go:126] duration metric: took 209.976345ms to wait for k8s-apps to be running ...
	I0919 19:28:04.983728   29946 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 19:28:04.983768   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:28:05.000249   29946 system_svc.go:56] duration metric: took 16.508734ms WaitForService to wait for kubelet
	I0919 19:28:05.000280   29946 kubeadm.go:582] duration metric: took 24.147874151s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:28:05.000306   29946 node_conditions.go:102] verifying NodePressure condition ...
	I0919 19:28:05.168981   29946 request.go:632] Waited for 168.596869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes
	I0919 19:28:05.169036   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes
	I0919 19:28:05.169043   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:05.169052   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:05.169059   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:05.172968   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:05.174140   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:28:05.174163   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:28:05.174173   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:28:05.174177   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:28:05.174180   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:28:05.174183   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:28:05.174187   29946 node_conditions.go:105] duration metric: took 173.877315ms to run NodePressure ...
	I0919 19:28:05.174197   29946 start.go:241] waiting for startup goroutines ...
	I0919 19:28:05.174217   29946 start.go:255] writing updated cluster config ...
	I0919 19:28:05.174491   29946 ssh_runner.go:195] Run: rm -f paused
	I0919 19:28:05.224162   29946 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 19:28:05.226313   29946 out.go:177] * Done! kubectl is now configured to use "ha-076992" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.925395736Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-8wfb7,Uid:e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726774086457559625,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:28:06.143892361Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7964879c-5097-490e-b1ba-dd41091ca283,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1726773949959964614,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-19T19:25:49.629787866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bst8x,Uid:165f4eae-fc28-4b50-b35f-f61f95d9872a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726773949949220893,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:49.628297100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbds4,Uid:89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1726773949939234068,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:49.620635006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&PodSandboxMetadata{Name:kube-proxy-4d8dc,Uid:4d522b18-9ae7-46a9-a6c7-e1560a1822de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726773937464283199,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-19T19:25:35.640844315Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&PodSandboxMetadata{Name:kindnet-j846w,Uid:cdccd08d-8a5d-4495-8ad3-5591de87862f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726773937453270806,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:35.645448663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-076992,Uid:3d5aa3049515e8c07c16189cb9b261d4,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1726773925049194129,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.173:8443,kubernetes.io/config.hash: 3d5aa3049515e8c07c16189cb9b261d4,kubernetes.io/config.seen: 2024-09-19T19:25:24.540182549Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-076992,Uid:b693200c7b44d836573bbd57560a83e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726773925034934891,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b693200c7b44d836573bbd57560a83e1,kubernetes.io/config.seen: 2024-09-19T19:25:24.540183657Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&PodSandboxMetadata{Name:etcd-ha-076992,Uid:79b7783d18d62d18697a4d1aa0ff5755,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726773925030413752,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.173:2379,kubernetes.io/config.hash: 79b7783d18d62d18697a4d1aa0ff5755,kubernetes.io/config.seen: 2024-09-19T19:25:24.540181273Z,kube
rnetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-076992,Uid:8d13805d19ec913a3d0f90382069839b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726773925029749181,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{kubernetes.io/config.hash: 8d13805d19ec913a3d0f90382069839b,kubernetes.io/config.seen: 2024-09-19T19:25:24.540180182Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-076992,Uid:c1c4b85bfdfb554afca940fe6375dba9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726773925018151522,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c1c4b85bfdfb554afca940fe6375dba9,kubernetes.io/config.seen: 2024-09-19T19:25:24.540176900Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=54aa1764-0d99-40bf-8b21-26933440d2ef name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.927410212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7df91933-5c76-4818-a83b-f710a6e360da name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.927491914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7df91933-5c76-4818-a83b-f710a6e360da name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.927756733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7df91933-5c76-4818-a83b-f710a6e360da name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.936143781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=376f7fbd-c768-4fbf-a08a-4a31a79cac76 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.936207894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=376f7fbd-c768-4fbf-a08a-4a31a79cac76 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.937164519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b863ffe-7338-4b2e-9d48-d80d718fa5ed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.937594559Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774309937573420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b863ffe-7338-4b2e-9d48-d80d718fa5ed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.938065821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb5cc3d9-14e6-4a4c-9fbf-ca8b620f70d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.938134552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb5cc3d9-14e6-4a4c-9fbf-ca8b620f70d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.938359991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb5cc3d9-14e6-4a4c-9fbf-ca8b620f70d3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.987283350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f908ad08-9d48-4bc0-b9cf-b5e2c280570e name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.987372410Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f908ad08-9d48-4bc0-b9cf-b5e2c280570e name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.988690937Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e061fb0a-43e4-4753-87b8-85ca73f90b48 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.989257650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774309989232521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e061fb0a-43e4-4753-87b8-85ca73f90b48 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.989903016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22b7bdbf-fdff-4a7c-a525-960858e547c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.989959946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22b7bdbf-fdff-4a7c-a525-960858e547c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:49 ha-076992 crio[661]: time="2024-09-19 19:31:49.990227713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22b7bdbf-fdff-4a7c-a525-960858e547c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:50 ha-076992 crio[661]: time="2024-09-19 19:31:50.029222765Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f8fb3c7-b2f6-4a61-ba83-05ab300e386b name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:50 ha-076992 crio[661]: time="2024-09-19 19:31:50.029296939Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f8fb3c7-b2f6-4a61-ba83-05ab300e386b name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:50 ha-076992 crio[661]: time="2024-09-19 19:31:50.030490642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7ce9bf1-b8ac-4ebb-8d8e-8306fe4fb714 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:50 ha-076992 crio[661]: time="2024-09-19 19:31:50.030891632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774310030871000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7ce9bf1-b8ac-4ebb-8d8e-8306fe4fb714 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:50 ha-076992 crio[661]: time="2024-09-19 19:31:50.031714207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f85e1774-8881-4827-95e9-38aa5285d436 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:50 ha-076992 crio[661]: time="2024-09-19 19:31:50.031785027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f85e1774-8881-4827-95e9-38aa5285d436 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:50 ha-076992 crio[661]: time="2024-09-19 19:31:50.032069591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f85e1774-8881-4827-95e9-38aa5285d436 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52db63dad4c31       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a8aaf854df641       busybox-7dff88458-8wfb7
	17ef846dadbee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   8583d1eda759f       coredns-7c65d6cfc9-nbds4
	cbaa19f6b3857       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   d65bb54e4c426       coredns-7c65d6cfc9-bst8x
	6eb7d57489862       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   5d96139db90a8       storage-provisioner
	d623b5f012d8a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   0273544afdfa6       kindnet-j846w
	9d62ecb2cc70a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   2a6c6ac66a434       kube-proxy-4d8dc
	3132b4bb29e16       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   9f7ef19609750       kube-vip-ha-076992
	5745c8d186325       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   09b02f34308ad       kube-scheduler-ha-076992
	f7da5064b19f5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   9cebb02c5eed5       kube-apiserver-ha-076992
	3beffc038ef33       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   fc5737a4c0f5c       etcd-ha-076992
	5b605d500b3ee       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   6a8db8524df21       kube-controller-manager-ha-076992
	
	
	==> coredns [17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3] <==
	[INFO] 10.244.0.4:34108 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.006817779s
	[INFO] 10.244.0.4:40322 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013826742s
	[INFO] 10.244.1.2:55399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298188s
	[INFO] 10.244.1.2:35261 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000170423s
	[INFO] 10.244.2.2:57349 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000113863s
	[INFO] 10.244.2.2:35304 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000093782s
	[INFO] 10.244.0.4:60710 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175542s
	[INFO] 10.244.0.4:56638 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002407779s
	[INFO] 10.244.1.2:60721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148724s
	[INFO] 10.244.2.2:40070 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138971s
	[INFO] 10.244.2.2:53394 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186542s
	[INFO] 10.244.2.2:54178 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000225634s
	[INFO] 10.244.2.2:53480 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001438271s
	[INFO] 10.244.2.2:48475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168626s
	[INFO] 10.244.2.2:49380 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160453s
	[INFO] 10.244.2.2:38326 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100289s
	[INFO] 10.244.1.2:47564 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107018s
	[INFO] 10.244.0.4:55521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119496s
	[INFO] 10.244.0.4:51830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118694s
	[INFO] 10.244.0.4:49301 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000181413s
	[INFO] 10.244.1.2:38961 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124955s
	[INFO] 10.244.1.2:37060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092863s
	[INFO] 10.244.1.2:44024 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085892s
	[INFO] 10.244.2.2:35688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014156s
	[INFO] 10.244.2.2:33974 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170311s
	
	
	==> coredns [cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0] <==
	[INFO] 10.244.0.4:45775 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000206662s
	[INFO] 10.244.0.4:34019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123934s
	[INFO] 10.244.1.2:60797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218519s
	[INFO] 10.244.1.2:44944 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001794304s
	[INFO] 10.244.1.2:51111 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185225s
	[INFO] 10.244.1.2:46956 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160685s
	[INFO] 10.244.1.2:36318 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001321241s
	[INFO] 10.244.1.2:53158 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118134s
	[INFO] 10.244.1.2:45995 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102925s
	[INFO] 10.244.2.2:55599 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001757807s
	[INFO] 10.244.0.4:50520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118756s
	[INFO] 10.244.0.4:48294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189838s
	[INFO] 10.244.0.4:52710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005729s
	[INFO] 10.244.0.4:56525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085763s
	[INFO] 10.244.1.2:43917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168832s
	[INFO] 10.244.1.2:34972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200932s
	[INFO] 10.244.1.2:50680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181389s
	[INFO] 10.244.2.2:51430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152587s
	[INFO] 10.244.2.2:37924 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000317695s
	[INFO] 10.244.2.2:46377 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000371446s
	[INFO] 10.244.2.2:36790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012815s
	[INFO] 10.244.0.4:35196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000409388s
	[INFO] 10.244.1.2:43265 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235404s
	[INFO] 10.244.2.2:56515 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113892s
	[INFO] 10.244.2.2:33574 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251263s
	
	
	==> describe nodes <==
	Name:               ha-076992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T19_25_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:31:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    ha-076992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 88962b0779f84ff6915974a39d1a24ba
	  System UUID:                88962b07-79f8-4ff6-9159-74a39d1a24ba
	  Boot ID:                    f4736dd6-fd6e-4dc3-b2ee-64f8773325ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8wfb7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-bst8x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 coredns-7c65d6cfc9-nbds4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m14s
	  kube-system                 etcd-ha-076992                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-j846w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-076992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-076992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-4d8dc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-076992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-076992                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m12s  kube-proxy       
	  Normal  Starting                 6m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s  kubelet          Node ha-076992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s  kubelet          Node ha-076992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s  kubelet          Node ha-076992 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal  NodeReady                6m1s   kubelet          Node ha-076992 status is now: NodeReady
	  Normal  RegisteredNode           5m18s  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal  RegisteredNode           4m5s   node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	
	
	Name:               ha-076992-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_26_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:26:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:29:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-076992-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fbb92a6f6fa49d49b42ed70b015086d
	  System UUID:                7fbb92a6-f6fa-49d4-9b42-ed70b015086d
	  Boot ID:                    d99d8bb8-fed0-4ef9-95a0-7b5cb6b4a8e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c64rv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-076992-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m24s
	  kube-system                 kindnet-6d8pz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m26s
	  kube-system                 kube-apiserver-ha-076992-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-076992-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-tjtfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-ha-076992-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-076992-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m26s)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m26s)  kubelet          Node ha-076992-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m26s)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-076992-m02 status is now: NodeNotReady
	
	
	Name:               ha-076992-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_27_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:27:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:31:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.66
	  Hostname:    ha-076992-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0db72b5d16d8492b8f2f42e6cedd7691
	  System UUID:                0db72b5d-16d8-492b-8f2f-42e6cedd7691
	  Boot ID:                    a11e77a1-44c6-47d3-9894-1e2db25df61f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jl6lr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-076992-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-89gmh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-076992-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-076992-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-4qxzr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-076992-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-076992-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-076992-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-076992-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-076992-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	
	
	Name:               ha-076992-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_28_43_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:28:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:31:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:28:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:28:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:28:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:29:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-076992-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37704cd295b34d23a0864637f4482597
	  System UUID:                37704cd2-95b3-4d23-a086-4637f4482597
	  Boot ID:                    7afcea43-e30f-4573-9142-69832448eb86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8jqvd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m8s
	  kube-system                 kube-proxy-8gt7w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)  kubelet          Node ha-076992-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-076992-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep19 19:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050539] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040218] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779433] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep19 19:25] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.560626] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.418534] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.061113] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050106] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.181483] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.133235] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.281192] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.948588] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.762419] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.059014] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.974334] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.083682] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344336] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.503085] kauditd_printk_skb: 38 callbacks suppressed
	[Sep19 19:26] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018] <==
	{"level":"warn","ts":"2024-09-19T19:31:50.257798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.290219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.299014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.303601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.313062Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.315628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.322791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.335938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.341222Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.345106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.351453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.359028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.360626Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.366878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.371640Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.375298Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.381295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.389392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.396752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.401751Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.405859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.411269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.417634Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.424954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:50.458363Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:31:50 up 6 min,  0 users,  load average: 0.20, 0.20, 0.10
	Linux ha-076992 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4] <==
	I0919 19:31:19.301654       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:31:29.299423       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:31:29.299534       1 main.go:299] handling current node
	I0919 19:31:29.299588       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:31:29.299608       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:31:29.299733       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:31:29.299753       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:31:29.299816       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:31:29.299834       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:31:39.295069       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:31:39.295797       1 main.go:299] handling current node
	I0919 19:31:39.295864       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:31:39.295880       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:31:39.296147       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:31:39.296174       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:31:39.296250       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:31:39.296272       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:31:49.295036       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:31:49.295191       1 main.go:299] handling current node
	I0919 19:31:49.295208       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:31:49.295213       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:31:49.295337       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:31:49.295366       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:31:49.295432       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:31:49.295459       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501] <==
	I0919 19:25:31.486188       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 19:25:31.506649       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 19:25:35.598891       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0919 19:25:35.750237       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0919 19:27:38.100207       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 13.658µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 19:27:38.100632       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 19:27:38.102611       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 19:27:38.103892       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 19:27:38.105160       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.382601ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0919 19:28:11.389256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45218: use of closed network connection
	E0919 19:28:11.576268       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45246: use of closed network connection
	E0919 19:28:11.773899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45258: use of closed network connection
	E0919 19:28:11.977200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45272: use of closed network connection
	E0919 19:28:12.158836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45298: use of closed network connection
	E0919 19:28:12.343311       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45308: use of closed network connection
	E0919 19:28:12.533653       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45320: use of closed network connection
	E0919 19:28:12.708696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45336: use of closed network connection
	E0919 19:28:12.880339       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45348: use of closed network connection
	E0919 19:28:13.172557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45378: use of closed network connection
	E0919 19:28:13.360524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45402: use of closed network connection
	E0919 19:28:13.537403       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45414: use of closed network connection
	E0919 19:28:13.726245       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45428: use of closed network connection
	E0919 19:28:13.903745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45458: use of closed network connection
	E0919 19:28:14.076234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45480: use of closed network connection
	W0919 19:29:39.951311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.173 192.168.39.66]
	
	
	==> kube-controller-manager [5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b] <==
	I0919 19:28:42.651135       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-076992-m04\" does not exist"
	I0919 19:28:42.696072       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-076992-m04" podCIDRs=["10.244.3.0/24"]
	I0919 19:28:42.696237       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:42.696385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:42.984651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:43.058418       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:43.437129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:44.991734       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-076992-m04"
	I0919 19:28:44.991858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:45.053922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:45.913734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:45.955524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:52.981964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:03.869117       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076992-m04"
	I0919 19:29:03.870215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:03.885512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:05.009111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:13.638377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:30:00.034775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:30:00.035207       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076992-m04"
	I0919 19:30:00.059561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:30:00.073804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.744937ms"
	I0919 19:30:00.073933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.501µs"
	I0919 19:30:00.989765       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:30:05.283636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	
	
	==> kube-proxy [9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 19:25:37.903821       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 19:25:37.932314       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.173"]
	E0919 19:25:37.932452       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:25:37.975043       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 19:25:37.975079       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 19:25:37.975107       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:25:37.978675       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:25:37.979280       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:25:37.979417       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:25:37.981041       1 config.go:199] "Starting service config controller"
	I0919 19:25:37.981519       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:25:37.981599       1 config.go:328] "Starting node config controller"
	I0919 19:25:37.981623       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:25:37.982405       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:25:37.982433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:25:38.081647       1 shared_informer.go:320] Caches are synced for service config
	I0919 19:25:38.081721       1 shared_informer.go:320] Caches are synced for node config
	I0919 19:25:38.082821       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1] <==
	W0919 19:25:29.292699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 19:25:29.292789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.292883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 19:25:29.292917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.315628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 19:25:29.315915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.317062       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 19:25:29.317708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.375676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 19:25:29.375771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.399790       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 19:25:29.399959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.458469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 19:25:29.458568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.500384       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 19:25:29.500442       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0919 19:25:32.657764       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 19:28:06.097590       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jl6lr\": pod busybox-7dff88458-jl6lr is already assigned to node \"ha-076992-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jl6lr" node="ha-076992-m03"
	E0919 19:28:06.098198       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3f7ee95d-11f9-4073-8fa9-d4aa5fc08d99(default/busybox-7dff88458-jl6lr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jl6lr"
	E0919 19:28:06.098359       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jl6lr\": pod busybox-7dff88458-jl6lr is already assigned to node \"ha-076992-m03\"" pod="default/busybox-7dff88458-jl6lr"
	I0919 19:28:06.098540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jl6lr" node="ha-076992-m03"
	E0919 19:28:06.176510       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8wfb7\": pod busybox-7dff88458-8wfb7 is already assigned to node \"ha-076992\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8wfb7" node="ha-076992"
	E0919 19:28:06.176725       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e9e5cd58-874f-41c6-8c0a-d37b5101a1f9(default/busybox-7dff88458-8wfb7) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8wfb7"
	E0919 19:28:06.181327       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8wfb7\": pod busybox-7dff88458-8wfb7 is already assigned to node \"ha-076992\"" pod="default/busybox-7dff88458-8wfb7"
	I0919 19:28:06.181857       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8wfb7" node="ha-076992"
	
	
	==> kubelet <==
	Sep 19 19:30:31 ha-076992 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 19:30:31 ha-076992 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 19:30:31 ha-076992 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 19:30:31 ha-076992 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 19:30:31 ha-076992 kubelet[1304]: E0919 19:30:31.509860    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774231509247618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:31 ha-076992 kubelet[1304]: E0919 19:30:31.509926    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774231509247618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:41 ha-076992 kubelet[1304]: E0919 19:30:41.515125    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774241513934130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:41 ha-076992 kubelet[1304]: E0919 19:30:41.515489    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774241513934130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:51 ha-076992 kubelet[1304]: E0919 19:30:51.516656    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774251516247410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:51 ha-076992 kubelet[1304]: E0919 19:30:51.516759    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774251516247410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:01 ha-076992 kubelet[1304]: E0919 19:31:01.520748    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774261520199169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:01 ha-076992 kubelet[1304]: E0919 19:31:01.520803    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774261520199169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:11 ha-076992 kubelet[1304]: E0919 19:31:11.523342    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774271522952876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:11 ha-076992 kubelet[1304]: E0919 19:31:11.523611    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774271522952876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:21 ha-076992 kubelet[1304]: E0919 19:31:21.527464    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774281526662586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:21 ha-076992 kubelet[1304]: E0919 19:31:21.527558    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774281526662586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:31 ha-076992 kubelet[1304]: E0919 19:31:31.406408    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 19 19:31:31 ha-076992 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 19:31:31 ha-076992 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 19:31:31 ha-076992 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 19:31:31 ha-076992 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 19:31:31 ha-076992 kubelet[1304]: E0919 19:31:31.535893    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774291534622152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:31 ha-076992 kubelet[1304]: E0919 19:31:31.535937    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774291534622152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:41 ha-076992 kubelet[1304]: E0919 19:31:41.537584    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774301537350727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:41 ha-076992 kubelet[1304]: E0919 19:31:41.537608    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774301537350727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076992 -n ha-076992
helpers_test.go:261: (dbg) Run:  kubectl --context ha-076992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.58s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr: (4.19066317s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-076992 -n ha-076992
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-076992 logs -n 25: (1.402051435s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992:/home/docker/cp-test_ha-076992-m03_ha-076992.txt                       |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992 sudo cat                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992.txt                                 |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m02:/home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m04 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp testdata/cp-test.txt                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992:/home/docker/cp-test_ha-076992-m04_ha-076992.txt                       |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992 sudo cat                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992.txt                                 |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m02:/home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03:/home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m03 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-076992 node stop m02 -v=7                                                     | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-076992 node start m02 -v=7                                                    | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 19:24:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 19:24:50.546945   29946 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:24:50.547063   29946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:50.547072   29946 out.go:358] Setting ErrFile to fd 2...
	I0919 19:24:50.547076   29946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:50.547225   29946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:24:50.547763   29946 out.go:352] Setting JSON to false
	I0919 19:24:50.548588   29946 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4035,"bootTime":1726769856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:24:50.548689   29946 start.go:139] virtualization: kvm guest
	I0919 19:24:50.550911   29946 out.go:177] * [ha-076992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:24:50.552265   29946 notify.go:220] Checking for updates...
	I0919 19:24:50.552285   29946 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:24:50.553819   29946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:24:50.555250   29946 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:24:50.556710   29946 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:50.557978   29946 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:24:50.559199   29946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:24:50.560718   29946 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:24:50.593907   29946 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 19:24:50.595154   29946 start.go:297] selected driver: kvm2
	I0919 19:24:50.595169   29946 start.go:901] validating driver "kvm2" against <nil>
	I0919 19:24:50.595180   29946 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:24:50.595817   29946 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:24:50.595876   29946 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 19:24:50.610266   29946 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 19:24:50.610336   29946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 19:24:50.610614   29946 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:24:50.610657   29946 cni.go:84] Creating CNI manager for ""
	I0919 19:24:50.610702   29946 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 19:24:50.610710   29946 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 19:24:50.610777   29946 start.go:340] cluster config:
	{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:24:50.610877   29946 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:24:50.612616   29946 out.go:177] * Starting "ha-076992" primary control-plane node in "ha-076992" cluster
	I0919 19:24:50.613886   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:24:50.613919   29946 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 19:24:50.613930   29946 cache.go:56] Caching tarball of preloaded images
	I0919 19:24:50.614002   29946 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:24:50.614013   29946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:24:50.614333   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:24:50.614355   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json: {Name:mk8d4afdb9fa7e7321b4f997efa478fa6418ce40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:24:50.614511   29946 start.go:360] acquireMachinesLock for ha-076992: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:24:50.614545   29946 start.go:364] duration metric: took 19.183µs to acquireMachinesLock for "ha-076992"
	I0919 19:24:50.614566   29946 start.go:93] Provisioning new machine with config: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:24:50.614666   29946 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 19:24:50.616202   29946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 19:24:50.616319   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:24:50.616360   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:24:50.630334   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39147
	I0919 19:24:50.630824   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:24:50.631360   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:24:50.631387   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:24:50.631735   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:24:50.631911   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:24:50.632045   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:24:50.632261   29946 start.go:159] libmachine.API.Create for "ha-076992" (driver="kvm2")
	I0919 19:24:50.632292   29946 client.go:168] LocalClient.Create starting
	I0919 19:24:50.632325   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem
	I0919 19:24:50.632369   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:24:50.632396   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:24:50.632469   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem
	I0919 19:24:50.632497   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:24:50.632517   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:24:50.632546   29946 main.go:141] libmachine: Running pre-create checks...
	I0919 19:24:50.632558   29946 main.go:141] libmachine: (ha-076992) Calling .PreCreateCheck
	I0919 19:24:50.632876   29946 main.go:141] libmachine: (ha-076992) Calling .GetConfigRaw
	I0919 19:24:50.633289   29946 main.go:141] libmachine: Creating machine...
	I0919 19:24:50.633304   29946 main.go:141] libmachine: (ha-076992) Calling .Create
	I0919 19:24:50.633442   29946 main.go:141] libmachine: (ha-076992) Creating KVM machine...
	I0919 19:24:50.634573   29946 main.go:141] libmachine: (ha-076992) DBG | found existing default KVM network
	I0919 19:24:50.635280   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:50.635109   29969 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0919 19:24:50.635311   29946 main.go:141] libmachine: (ha-076992) DBG | created network xml: 
	I0919 19:24:50.635327   29946 main.go:141] libmachine: (ha-076992) DBG | <network>
	I0919 19:24:50.635345   29946 main.go:141] libmachine: (ha-076992) DBG |   <name>mk-ha-076992</name>
	I0919 19:24:50.635359   29946 main.go:141] libmachine: (ha-076992) DBG |   <dns enable='no'/>
	I0919 19:24:50.635371   29946 main.go:141] libmachine: (ha-076992) DBG |   
	I0919 19:24:50.635380   29946 main.go:141] libmachine: (ha-076992) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0919 19:24:50.635421   29946 main.go:141] libmachine: (ha-076992) DBG |     <dhcp>
	I0919 19:24:50.635435   29946 main.go:141] libmachine: (ha-076992) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0919 19:24:50.635458   29946 main.go:141] libmachine: (ha-076992) DBG |     </dhcp>
	I0919 19:24:50.635488   29946 main.go:141] libmachine: (ha-076992) DBG |   </ip>
	I0919 19:24:50.635501   29946 main.go:141] libmachine: (ha-076992) DBG |   
	I0919 19:24:50.635515   29946 main.go:141] libmachine: (ha-076992) DBG | </network>
	I0919 19:24:50.635528   29946 main.go:141] libmachine: (ha-076992) DBG | 
	I0919 19:24:50.640246   29946 main.go:141] libmachine: (ha-076992) DBG | trying to create private KVM network mk-ha-076992 192.168.39.0/24...
	I0919 19:24:50.704681   29946 main.go:141] libmachine: (ha-076992) DBG | private KVM network mk-ha-076992 192.168.39.0/24 created
	I0919 19:24:50.704725   29946 main.go:141] libmachine: (ha-076992) Setting up store path in /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992 ...
	I0919 19:24:50.704741   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:50.704651   29969 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:50.704763   29946 main.go:141] libmachine: (ha-076992) Building disk image from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 19:24:50.704783   29946 main.go:141] libmachine: (ha-076992) Downloading /home/jenkins/minikube-integration/19664-7917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0919 19:24:50.947095   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:50.946892   29969 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa...
	I0919 19:24:51.013606   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:51.013482   29969 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/ha-076992.rawdisk...
	I0919 19:24:51.013627   29946 main.go:141] libmachine: (ha-076992) DBG | Writing magic tar header
	I0919 19:24:51.013637   29946 main.go:141] libmachine: (ha-076992) DBG | Writing SSH key tar header
	I0919 19:24:51.013650   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:51.013598   29969 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992 ...
	I0919 19:24:51.013757   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992
	I0919 19:24:51.013788   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992 (perms=drwx------)
	I0919 19:24:51.013802   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines
	I0919 19:24:51.013816   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:51.013823   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917
	I0919 19:24:51.013833   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines (perms=drwxr-xr-x)
	I0919 19:24:51.013844   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 19:24:51.013855   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube (perms=drwxr-xr-x)
	I0919 19:24:51.013870   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917 (perms=drwxrwxr-x)
	I0919 19:24:51.013881   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 19:24:51.013890   29946 main.go:141] libmachine: (ha-076992) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 19:24:51.013899   29946 main.go:141] libmachine: (ha-076992) Creating domain...
	I0919 19:24:51.013908   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home/jenkins
	I0919 19:24:51.013915   29946 main.go:141] libmachine: (ha-076992) DBG | Checking permissions on dir: /home
	I0919 19:24:51.013924   29946 main.go:141] libmachine: (ha-076992) DBG | Skipping /home - not owner
	I0919 19:24:51.014892   29946 main.go:141] libmachine: (ha-076992) define libvirt domain using xml: 
	I0919 19:24:51.014904   29946 main.go:141] libmachine: (ha-076992) <domain type='kvm'>
	I0919 19:24:51.014910   29946 main.go:141] libmachine: (ha-076992)   <name>ha-076992</name>
	I0919 19:24:51.014944   29946 main.go:141] libmachine: (ha-076992)   <memory unit='MiB'>2200</memory>
	I0919 19:24:51.014958   29946 main.go:141] libmachine: (ha-076992)   <vcpu>2</vcpu>
	I0919 19:24:51.014968   29946 main.go:141] libmachine: (ha-076992)   <features>
	I0919 19:24:51.014975   29946 main.go:141] libmachine: (ha-076992)     <acpi/>
	I0919 19:24:51.014982   29946 main.go:141] libmachine: (ha-076992)     <apic/>
	I0919 19:24:51.015012   29946 main.go:141] libmachine: (ha-076992)     <pae/>
	I0919 19:24:51.015033   29946 main.go:141] libmachine: (ha-076992)     
	I0919 19:24:51.015043   29946 main.go:141] libmachine: (ha-076992)   </features>
	I0919 19:24:51.015052   29946 main.go:141] libmachine: (ha-076992)   <cpu mode='host-passthrough'>
	I0919 19:24:51.015061   29946 main.go:141] libmachine: (ha-076992)   
	I0919 19:24:51.015070   29946 main.go:141] libmachine: (ha-076992)   </cpu>
	I0919 19:24:51.015078   29946 main.go:141] libmachine: (ha-076992)   <os>
	I0919 19:24:51.015088   29946 main.go:141] libmachine: (ha-076992)     <type>hvm</type>
	I0919 19:24:51.015098   29946 main.go:141] libmachine: (ha-076992)     <boot dev='cdrom'/>
	I0919 19:24:51.015117   29946 main.go:141] libmachine: (ha-076992)     <boot dev='hd'/>
	I0919 19:24:51.015130   29946 main.go:141] libmachine: (ha-076992)     <bootmenu enable='no'/>
	I0919 19:24:51.015139   29946 main.go:141] libmachine: (ha-076992)   </os>
	I0919 19:24:51.015171   29946 main.go:141] libmachine: (ha-076992)   <devices>
	I0919 19:24:51.015199   29946 main.go:141] libmachine: (ha-076992)     <disk type='file' device='cdrom'>
	I0919 19:24:51.015212   29946 main.go:141] libmachine: (ha-076992)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/boot2docker.iso'/>
	I0919 19:24:51.015227   29946 main.go:141] libmachine: (ha-076992)       <target dev='hdc' bus='scsi'/>
	I0919 19:24:51.015247   29946 main.go:141] libmachine: (ha-076992)       <readonly/>
	I0919 19:24:51.015259   29946 main.go:141] libmachine: (ha-076992)     </disk>
	I0919 19:24:51.015272   29946 main.go:141] libmachine: (ha-076992)     <disk type='file' device='disk'>
	I0919 19:24:51.015287   29946 main.go:141] libmachine: (ha-076992)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 19:24:51.015303   29946 main.go:141] libmachine: (ha-076992)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/ha-076992.rawdisk'/>
	I0919 19:24:51.015314   29946 main.go:141] libmachine: (ha-076992)       <target dev='hda' bus='virtio'/>
	I0919 19:24:51.015325   29946 main.go:141] libmachine: (ha-076992)     </disk>
	I0919 19:24:51.015334   29946 main.go:141] libmachine: (ha-076992)     <interface type='network'>
	I0919 19:24:51.015347   29946 main.go:141] libmachine: (ha-076992)       <source network='mk-ha-076992'/>
	I0919 19:24:51.015371   29946 main.go:141] libmachine: (ha-076992)       <model type='virtio'/>
	I0919 19:24:51.015382   29946 main.go:141] libmachine: (ha-076992)     </interface>
	I0919 19:24:51.015392   29946 main.go:141] libmachine: (ha-076992)     <interface type='network'>
	I0919 19:24:51.015402   29946 main.go:141] libmachine: (ha-076992)       <source network='default'/>
	I0919 19:24:51.015412   29946 main.go:141] libmachine: (ha-076992)       <model type='virtio'/>
	I0919 19:24:51.015420   29946 main.go:141] libmachine: (ha-076992)     </interface>
	I0919 19:24:51.015432   29946 main.go:141] libmachine: (ha-076992)     <serial type='pty'>
	I0919 19:24:51.015443   29946 main.go:141] libmachine: (ha-076992)       <target port='0'/>
	I0919 19:24:51.015451   29946 main.go:141] libmachine: (ha-076992)     </serial>
	I0919 19:24:51.015462   29946 main.go:141] libmachine: (ha-076992)     <console type='pty'>
	I0919 19:24:51.015471   29946 main.go:141] libmachine: (ha-076992)       <target type='serial' port='0'/>
	I0919 19:24:51.015502   29946 main.go:141] libmachine: (ha-076992)     </console>
	I0919 19:24:51.015516   29946 main.go:141] libmachine: (ha-076992)     <rng model='virtio'>
	I0919 19:24:51.015528   29946 main.go:141] libmachine: (ha-076992)       <backend model='random'>/dev/random</backend>
	I0919 19:24:51.015538   29946 main.go:141] libmachine: (ha-076992)     </rng>
	I0919 19:24:51.015546   29946 main.go:141] libmachine: (ha-076992)     
	I0919 19:24:51.015554   29946 main.go:141] libmachine: (ha-076992)     
	I0919 19:24:51.015563   29946 main.go:141] libmachine: (ha-076992)   </devices>
	I0919 19:24:51.015571   29946 main.go:141] libmachine: (ha-076992) </domain>
	I0919 19:24:51.015594   29946 main.go:141] libmachine: (ha-076992) 
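The domain XML assembled above is handed to libvirt next ("define libvirt domain using xml", then "Creating domain..."). A minimal sketch of that step with the same Go bindings, reusing a *libvirt.Connect handle as in the previous sketch; the network names come from the log, the function itself is hypothetical:

// Assumes: import libvirt "libvirt.org/go/libvirt"

// defineAndStart defines a persistent domain from the generated XML, makes
// sure both attached networks are active, and then boots the guest.
func defineAndStart(conn *libvirt.Connect, domXML string) error {
	dom, err := conn.DomainDefineXML(domXML) // "define libvirt domain using xml"
	if err != nil {
		return err
	}
	defer dom.Free()

	// "Ensuring networks are active..." for both interfaces in the XML.
	for _, name := range []string{"default", "mk-ha-076992"} {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			return err
		}
		active, err := net.IsActive()
		if err == nil && !active {
			err = net.Create()
		}
		net.Free()
		if err != nil {
			return err
		}
	}
	return dom.Create() // "Creating domain..." actually starts the VM
}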
	I0919 19:24:51.019925   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:db:cf:56 in network default
	I0919 19:24:51.020474   29946 main.go:141] libmachine: (ha-076992) Ensuring networks are active...
	I0919 19:24:51.020498   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:51.021112   29946 main.go:141] libmachine: (ha-076992) Ensuring network default is active
	I0919 19:24:51.021403   29946 main.go:141] libmachine: (ha-076992) Ensuring network mk-ha-076992 is active
	I0919 19:24:51.021908   29946 main.go:141] libmachine: (ha-076992) Getting domain xml...
	I0919 19:24:51.022590   29946 main.go:141] libmachine: (ha-076992) Creating domain...
	I0919 19:24:52.199008   29946 main.go:141] libmachine: (ha-076992) Waiting to get IP...
	I0919 19:24:52.199822   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:52.200184   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:52.200222   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:52.200179   29969 retry.go:31] will retry after 305.917546ms: waiting for machine to come up
	I0919 19:24:52.507816   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:52.508347   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:52.508367   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:52.508306   29969 retry.go:31] will retry after 257.743777ms: waiting for machine to come up
	I0919 19:24:52.767675   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:52.768093   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:52.768147   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:52.768045   29969 retry.go:31] will retry after 451.176186ms: waiting for machine to come up
	I0919 19:24:53.220690   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:53.221075   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:53.221127   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:53.221017   29969 retry.go:31] will retry after 532.893204ms: waiting for machine to come up
	I0919 19:24:53.755758   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:53.756124   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:53.756151   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:53.756077   29969 retry.go:31] will retry after 735.36183ms: waiting for machine to come up
	I0919 19:24:54.492954   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:54.493288   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:54.493311   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:54.493234   29969 retry.go:31] will retry after 820.552907ms: waiting for machine to come up
	I0919 19:24:55.315112   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:55.315416   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:55.315452   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:55.315388   29969 retry.go:31] will retry after 1.159630492s: waiting for machine to come up
	I0919 19:24:56.476212   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:56.476585   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:56.476603   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:56.476554   29969 retry.go:31] will retry after 1.27132767s: waiting for machine to come up
	I0919 19:24:57.749988   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:57.750422   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:57.750445   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:57.750374   29969 retry.go:31] will retry after 1.45971409s: waiting for machine to come up
	I0919 19:24:59.211323   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:24:59.211646   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:24:59.211667   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:24:59.211594   29969 retry.go:31] will retry after 1.806599967s: waiting for machine to come up
	I0919 19:25:01.019773   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:01.020204   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:01.020230   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:01.020169   29969 retry.go:31] will retry after 1.98521469s: waiting for machine to come up
	I0919 19:25:03.008256   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:03.008710   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:03.008731   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:03.008667   29969 retry.go:31] will retry after 3.161929877s: waiting for machine to come up
	I0919 19:25:06.172436   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:06.172851   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:06.172870   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:06.172810   29969 retry.go:31] will retry after 3.065142974s: waiting for machine to come up
	I0919 19:25:09.242150   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:09.242595   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find current IP address of domain ha-076992 in network mk-ha-076992
	I0919 19:25:09.242618   29946 main.go:141] libmachine: (ha-076992) DBG | I0919 19:25:09.242551   29969 retry.go:31] will retry after 4.628547568s: waiting for machine to come up
	I0919 19:25:13.875203   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.875628   29946 main.go:141] libmachine: (ha-076992) Found IP for machine: 192.168.39.173
	I0919 19:25:13.875655   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has current primary IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.875661   29946 main.go:141] libmachine: (ha-076992) Reserving static IP address...
	I0919 19:25:13.876020   29946 main.go:141] libmachine: (ha-076992) DBG | unable to find host DHCP lease matching {name: "ha-076992", mac: "52:54:00:7d:f5:95", ip: "192.168.39.173"} in network mk-ha-076992
	I0919 19:25:13.945252   29946 main.go:141] libmachine: (ha-076992) DBG | Getting to WaitForSSH function...
	I0919 19:25:13.945280   29946 main.go:141] libmachine: (ha-076992) Reserved static IP address: 192.168.39.173
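The block of "will retry after …: waiting for machine to come up" lines above is a polling loop with growing delays: the driver keeps asking for a DHCP lease matching the guest's MAC until one appears. A generic sketch of that pattern; lookupLease is a hypothetical callback, and the initial delay and cap are assumptions:

// Assumes: import ("fmt"; "log"; "time")

// waitForIP polls lookupLease until it reports an address or the timeout
// elapses, lengthening the pause between attempts much like retry.go above.
func waitForIP(timeout time.Duration, lookupLease func() (string, bool)) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupLease(); ok {
			return ip, nil
		}
		log.Printf("will retry after %v: waiting for machine to come up", delay)
		time.Sleep(delay)
		// Roughly exponential backoff, capped so we keep probing regularly.
		delay = time.Duration(float64(delay) * 1.5)
		if delay > 5*time.Second {
			delay = 5 * time.Second
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP address")
}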
	I0919 19:25:13.945289   29946 main.go:141] libmachine: (ha-076992) Waiting for SSH to be available...
	I0919 19:25:13.947766   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.948158   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:13.948194   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:13.948312   29946 main.go:141] libmachine: (ha-076992) DBG | Using SSH client type: external
	I0919 19:25:13.948335   29946 main.go:141] libmachine: (ha-076992) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa (-rw-------)
	I0919 19:25:13.948378   29946 main.go:141] libmachine: (ha-076992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 19:25:13.948385   29946 main.go:141] libmachine: (ha-076992) DBG | About to run SSH command:
	I0919 19:25:13.948400   29946 main.go:141] libmachine: (ha-076992) DBG | exit 0
	I0919 19:25:14.069031   29946 main.go:141] libmachine: (ha-076992) DBG | SSH cmd err, output: <nil>: 
	I0919 19:25:14.069310   29946 main.go:141] libmachine: (ha-076992) KVM machine creation complete!
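The "Waiting for SSH" probe above simply runs `exit 0` through an external ssh client with host-key checking disabled and the freshly generated key. The same probe wrapped in os/exec, as a sketch (the options are a subset of those in the log; the function name is hypothetical):

// Assumes: import "os/exec"

// sshReady returns nil once `exit 0` succeeds over SSH, i.e. sshd inside the
// guest is up and accepts the generated key.
func sshReady(ip, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+ip,
		"exit 0")
	return cmd.Run()
}

Called in a loop with a short sleep between attempts, this gives the same behaviour as the WaitForSSH step in the log.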
	I0919 19:25:14.069628   29946 main.go:141] libmachine: (ha-076992) Calling .GetConfigRaw
	I0919 19:25:14.070250   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:14.070406   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:14.070540   29946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 19:25:14.070554   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:14.072128   29946 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 19:25:14.072140   29946 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 19:25:14.072145   29946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 19:25:14.072151   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.074112   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.074425   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.074456   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.074626   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.074770   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.074885   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.074971   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.075077   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.075278   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.075290   29946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 19:25:14.176659   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:14.176688   29946 main.go:141] libmachine: Detecting the provisioner...
	I0919 19:25:14.176697   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.179372   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.179694   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.179715   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.179850   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.180053   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.180210   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.180361   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.180525   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.180682   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.180691   29946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 19:25:14.282081   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0919 19:25:14.282192   29946 main.go:141] libmachine: found compatible host: buildroot
	I0919 19:25:14.282206   29946 main.go:141] libmachine: Provisioning with buildroot...
	I0919 19:25:14.282215   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:25:14.282509   29946 buildroot.go:166] provisioning hostname "ha-076992"
	I0919 19:25:14.282531   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:25:14.282795   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.286540   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.286900   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.286924   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.287087   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.287264   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.287404   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.287528   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.287657   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.287847   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.287862   29946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992 && echo "ha-076992" | sudo tee /etc/hostname
	I0919 19:25:14.405366   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:25:14.405398   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.408109   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.408451   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.408503   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.408709   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.408884   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.409027   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.409148   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.409275   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.409515   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.409532   29946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:25:14.518352   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:14.518409   29946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:25:14.518432   29946 buildroot.go:174] setting up certificates
	I0919 19:25:14.518441   29946 provision.go:84] configureAuth start
	I0919 19:25:14.518450   29946 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:25:14.518683   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:14.520859   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.521176   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.521197   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.521352   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.523136   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.523477   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.523502   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.523620   29946 provision.go:143] copyHostCerts
	I0919 19:25:14.523651   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:14.523697   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:25:14.523707   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:14.523782   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:25:14.523897   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:14.523925   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:25:14.523934   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:14.523976   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:25:14.524055   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:14.524076   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:25:14.524085   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:14.524119   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:25:14.524203   29946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992 san=[127.0.0.1 192.168.39.173 ha-076992 localhost minikube]
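configureAuth above issues a server certificate whose SANs cover the machine IP, hostname and the usual localhost names. A bare-bones sketch of producing such a certificate from an existing CA with crypto/x509; the SAN values are taken from the log line, the helper name and validity period are assumptions, and this is not minikube's implementation:

// Assumes: import ("crypto/rand"; "crypto/rsa"; "crypto/x509"; "crypto/x509/pkix"; "math/big"; "net"; "time")

// newServerCert signs a server certificate for the given SANs with the CA pair.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-076992"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[127.0.0.1 192.168.39.173 ha-076992 localhost minikube]
		DNSNames:    []string{"ha-076992", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.173")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil // caller PEM-encodes these as server.pem / server-key.pem
}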
	I0919 19:25:14.665666   29946 provision.go:177] copyRemoteCerts
	I0919 19:25:14.665718   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:25:14.665740   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.668329   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.668676   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.668708   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.668855   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.669012   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.669229   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.669429   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:14.751236   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:25:14.751315   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:25:14.776009   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:25:14.776073   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 19:25:14.800333   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:25:14.800401   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:25:14.824393   29946 provision.go:87] duration metric: took 305.938756ms to configureAuth
	I0919 19:25:14.824421   29946 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:25:14.824627   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:14.824707   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:14.827604   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.827968   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:14.827993   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:14.828193   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:14.828404   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.828556   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:14.828663   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:14.828790   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:14.829402   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:14.829444   29946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:25:15.045474   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:25:15.045502   29946 main.go:141] libmachine: Checking connection to Docker...
	I0919 19:25:15.045510   29946 main.go:141] libmachine: (ha-076992) Calling .GetURL
	I0919 19:25:15.046752   29946 main.go:141] libmachine: (ha-076992) DBG | Using libvirt version 6000000
	I0919 19:25:15.048660   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.049036   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.049059   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.049264   29946 main.go:141] libmachine: Docker is up and running!
	I0919 19:25:15.049278   29946 main.go:141] libmachine: Reticulating splines...
	I0919 19:25:15.049284   29946 client.go:171] duration metric: took 24.416985175s to LocalClient.Create
	I0919 19:25:15.049305   29946 start.go:167] duration metric: took 24.417044575s to libmachine.API.Create "ha-076992"
	I0919 19:25:15.049317   29946 start.go:293] postStartSetup for "ha-076992" (driver="kvm2")
	I0919 19:25:15.049330   29946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:25:15.049346   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.049548   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:25:15.049567   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.051882   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.052218   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.052245   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.052457   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.052636   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.052818   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.052959   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:15.135380   29946 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:25:15.139841   29946 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:25:15.139871   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:25:15.139953   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:25:15.140035   29946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:25:15.140047   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:25:15.140142   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:25:15.149803   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:25:15.173954   29946 start.go:296] duration metric: took 124.6206ms for postStartSetup
	I0919 19:25:15.174015   29946 main.go:141] libmachine: (ha-076992) Calling .GetConfigRaw
	I0919 19:25:15.174578   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:15.176983   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.177379   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.177404   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.177609   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:15.177797   29946 start.go:128] duration metric: took 24.563118372s to createHost
	I0919 19:25:15.177822   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.179973   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.180294   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.180319   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.180465   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.180655   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.180790   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.180976   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.181181   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:15.181358   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:25:15.181374   29946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:25:15.282086   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726773915.259292374
	
	I0919 19:25:15.282107   29946 fix.go:216] guest clock: 1726773915.259292374
	I0919 19:25:15.282114   29946 fix.go:229] Guest: 2024-09-19 19:25:15.259292374 +0000 UTC Remote: 2024-09-19 19:25:15.177809817 +0000 UTC m=+24.663846475 (delta=81.482557ms)
	I0919 19:25:15.282172   29946 fix.go:200] guest clock delta is within tolerance: 81.482557ms
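The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host time recorded when the command returned; the ~81 ms delta is well inside tolerance, so no clock adjustment is needed. A sketch of that arithmetic (the helper is hypothetical, and float parsing is only precise to about a microsecond, which is enough here):

// Assumes: import ("fmt"; "strconv"; "strings"; "time")

// clockDelta parses "seconds.nanoseconds" from `date +%s.%N` on the guest and
// returns how far the guest clock is ahead of (or behind) the host reference.
func clockDelta(guestOut string, hostRef time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostRef), nil
}

// With the values in the log: guest 1726773915.259292374 vs. host reference
// 2024-09-19 19:25:15.177809817 UTC yields a delta of roughly +81ms.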
	I0919 19:25:15.282183   29946 start.go:83] releasing machines lock for "ha-076992", held for 24.66762655s
	I0919 19:25:15.282207   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.282416   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:15.285015   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.285310   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.285332   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.285551   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.285982   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.286151   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:15.286236   29946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:25:15.286279   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.286315   29946 ssh_runner.go:195] Run: cat /version.json
	I0919 19:25:15.286338   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:15.288664   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.288927   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.288997   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.289024   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.289155   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.289279   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:15.289305   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:15.289315   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.289547   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:15.289548   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.289752   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:15.289745   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:15.289876   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:15.289970   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:15.362421   29946 ssh_runner.go:195] Run: systemctl --version
	I0919 19:25:15.387771   29946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:25:15.544684   29946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:25:15.550599   29946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:25:15.550653   29946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:25:15.566463   29946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 19:25:15.566486   29946 start.go:495] detecting cgroup driver to use...
	I0919 19:25:15.566538   29946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:25:15.582773   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:25:15.596900   29946 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:25:15.596957   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:25:15.610508   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:25:15.624376   29946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:25:15.733813   29946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:25:15.878726   29946 docker.go:233] disabling docker service ...
	I0919 19:25:15.878810   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:25:15.892801   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:25:15.905716   29946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:25:16.030572   29946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:25:16.160731   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:25:16.174416   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:25:16.192761   29946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:25:16.192830   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.203609   29946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:25:16.203677   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.214426   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.225032   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.235752   29946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:25:16.247045   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.258205   29946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:25:16.275682   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
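The run of sed commands above rewrites individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). The same kind of in-place key rewrite expressed in Go with a regexp, as a sketch; the file path mirrors the log, the helper name is hypothetical:

// Assumes: import ("os"; "regexp")

// setCrioKey replaces any existing `key = ...` line with `key = "value"`,
// mirroring e.g.: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

// e.g. setCrioKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
//      setCrioKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10")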
	I0919 19:25:16.286480   29946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:25:16.296369   29946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 19:25:16.296429   29946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 19:25:16.310714   29946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:25:16.321030   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:25:16.442591   29946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:25:16.537253   29946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:25:16.537333   29946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:25:16.542338   29946 start.go:563] Will wait 60s for crictl version
	I0919 19:25:16.542399   29946 ssh_runner.go:195] Run: which crictl
	I0919 19:25:16.546294   29946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:25:16.588011   29946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:25:16.588101   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:25:16.616308   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:25:16.647185   29946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:25:16.648600   29946 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:25:16.651059   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:16.651358   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:16.651387   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:16.651601   29946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:25:16.655720   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:25:16.669431   29946 kubeadm.go:883] updating cluster {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 19:25:16.669533   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:25:16.669573   29946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:25:16.706546   29946 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0919 19:25:16.706605   29946 ssh_runner.go:195] Run: which lz4
	I0919 19:25:16.710770   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0919 19:25:16.710856   29946 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 19:25:16.715145   29946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 19:25:16.715174   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0919 19:25:18.046106   29946 crio.go:462] duration metric: took 1.335269784s to copy over tarball
	I0919 19:25:18.046183   29946 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 19:25:20.022215   29946 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.975997168s)
	I0919 19:25:20.022248   29946 crio.go:469] duration metric: took 1.976118647s to extract the tarball
	I0919 19:25:20.022255   29946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 19:25:20.059151   29946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:25:20.102732   29946 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:25:20.102759   29946 cache_images.go:84] Images are preloaded, skipping loading
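The preload flow above boils down to: ask the runtime which images it already has, and only copy and unpack the preloaded tarball if the expected control-plane image is missing. A rough sketch of that decision, shelling out to the same crictl and tar commands the log shows (paths are illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // hasImage reports whether `crictl images` already lists the reference;
    // the log above runs the same query before deciding whether to preload.
    func hasImage(ref string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        return bytes.Contains(out, []byte(ref)), nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
        if err != nil {
            fmt.Println("crictl query failed:", err)
            return
        }
        if ok {
            fmt.Println("all images are preloaded, skipping extraction")
            return
        }
        // Unpack the preloaded image tarball into /var, mirroring the tar
        // invocation in the log (the tarball path is illustrative).
        tar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := tar.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
        }
    }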
	I0919 19:25:20.102769   29946 kubeadm.go:934] updating node { 192.168.39.173 8443 v1.31.1 crio true true} ...
	I0919 19:25:20.102901   29946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:25:20.102991   29946 ssh_runner.go:195] Run: crio config
	I0919 19:25:20.149091   29946 cni.go:84] Creating CNI manager for ""
	I0919 19:25:20.149117   29946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 19:25:20.149129   29946 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 19:25:20.149151   29946 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076992 NodeName:ha-076992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 19:25:20.149390   29946 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
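	The kubeadm.yaml dump above is rendered from the option set logged at kubeadm.go:181. A minimal text/template sketch of that kind of render step, using a made-up, trimmed-down parameter struct rather than minikube's real template:

    package main

    import (
        "os"
        "text/template"
    )

    // Params is a hypothetical, trimmed-down version of the options that the
    // log prints just before the generated kubeadm config.
    type Params struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
        ServiceSubnet    string
        K8sVersion       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        p := Params{
            AdvertiseAddress: "192.168.39.173",
            BindPort:         8443,
            NodeName:         "ha-076992",
            PodSubnet:        "10.244.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
            K8sVersion:       "v1.31.1",
        }
        // Render to stdout; the log shows the equivalent file being written
        // to /var/tmp/minikube/kubeadm.yaml.new before it is copied into place.
        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
    }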
	
	I0919 19:25:20.149434   29946 kube-vip.go:115] generating kube-vip config ...
	I0919 19:25:20.149487   29946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:25:20.167402   29946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:25:20.167516   29946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
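	kube-vip is deployed as a static pod: the manifest above is written into the kubelet's staticPodPath (the scp to /etc/kubernetes/manifests/kube-vip.yaml a few lines below), the kubelet starts it without any API server involvement, and kube-vip then claims the 192.168.39.254 VIP using the leader-election settings (vip_leaseduration, vip_renewdeadline, vip_retryperiod). A small sketch of the write-a-static-pod step, with placeholder paths and content:

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    // writeStaticPod drops a manifest into the kubelet static pod directory;
    // the kubelet watches that directory and runs the pod on its own, which
    // is how the kube-vip pod above comes up before the API server exists.
    func writeStaticPod(manifestDir, name string, manifest []byte) error {
        if err := os.MkdirAll(manifestDir, 0755); err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(manifestDir, name+".yaml"), manifest, 0644)
    }

    func main() {
        // Path and content are placeholders; the real manifest is the
        // kube-vip Pod shown in the log above.
        manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n")
        if err := writeStaticPod("./manifests", "kube-vip", manifest); err != nil {
            log.Fatal(err)
        }
    }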
	I0919 19:25:20.167589   29946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:25:20.177872   29946 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 19:25:20.177945   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 19:25:20.187340   29946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0919 19:25:20.203708   29946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:25:20.219797   29946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0919 19:25:20.236038   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0919 19:25:20.251815   29946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:25:20.255527   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:25:20.267874   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:25:20.389268   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:25:20.406525   29946 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.173
	I0919 19:25:20.406544   29946 certs.go:194] generating shared ca certs ...
	I0919 19:25:20.406562   29946 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.406708   29946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:25:20.406775   29946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:25:20.406789   29946 certs.go:256] generating profile certs ...
	I0919 19:25:20.406855   29946 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:25:20.406880   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt with IP's: []
	I0919 19:25:20.508433   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt ...
	I0919 19:25:20.508466   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt: {Name:mkfa51b5957d9c0689bd29c9d7ac67976197d1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.508644   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key ...
	I0919 19:25:20.508659   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key: {Name:mke8583745dcb3fd2e449775522b103cfe463401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.508755   29946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77
	I0919 19:25:20.508774   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.254]
	I0919 19:25:20.790439   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77 ...
	I0919 19:25:20.790476   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77: {Name:mk129f473c8ca2bf9c282104464393dd4c0e2ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.790661   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77 ...
	I0919 19:25:20.790678   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77: {Name:mk3e710a4268d5f56461b3aadb1485c362a2d2c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.790775   29946 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.2f119a77 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:25:20.790887   29946 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.2f119a77 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:25:20.790975   29946 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:25:20.790995   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt with IP's: []
	I0919 19:25:20.971771   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt ...
	I0919 19:25:20.971802   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt: {Name:mk0aab9d02f395e9da9c35e7e8f603cb6b5cdfc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:20.971977   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key ...
	I0919 19:25:20.971992   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key: {Name:mke99ffbb66c5a7dba2706f1581886421c464464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
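The certs.go/crypto.go lines above issue profile certificates signed by the shared minikubeCA, with the node IP and the HA VIP included as SANs on the API server certificate. A compact, self-contained Go illustration of issuing a CA-signed certificate with IP SANs (names and lifetimes are arbitrary; this is not minikube's code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA, standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert with the node IP and the HA VIP as SANs, matching the
        // IP list the log prints (192.168.39.173 and 192.168.39.254).
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("192.168.39.173"), net.ParseIP("192.168.39.254"),
            },
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }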
	I0919 19:25:20.972083   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:25:20.972116   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:25:20.972133   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:25:20.972152   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:25:20.972170   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:25:20.972189   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:25:20.972210   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:25:20.972227   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:25:20.972297   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:25:20.972349   29946 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:25:20.972361   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:25:20.972459   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:25:20.972537   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:25:20.972573   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:25:20.972635   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:25:20.972677   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:20.972699   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:25:20.972718   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:25:20.973287   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:25:20.998208   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:25:21.020664   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:25:21.043465   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:25:21.065487   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 19:25:21.087887   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 19:25:21.110693   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:25:21.134315   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:25:21.159427   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:25:21.209793   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:25:21.234146   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:25:21.256777   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 19:25:21.273318   29946 ssh_runner.go:195] Run: openssl version
	I0919 19:25:21.279164   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:25:21.290077   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:25:21.294953   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:25:21.295015   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:25:21.301042   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:25:21.311548   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:25:21.322467   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:25:21.326955   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:25:21.327033   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:25:21.332698   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:25:21.343007   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:25:21.353411   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:21.357905   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:21.357956   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:25:21.363494   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
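Each PEM is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients discover trusted CAs. A sketch of that hash-and-link step, shelling out to openssl the same way the log does (the destination directory here is a placeholder):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash creates the "<hash>.0" symlink that OpenSSL's trust
    // directory layout expects for a CA certificate.
    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        if err := os.MkdirAll(certsDir, 0755); err != nil {
            return err
        }
        link := filepath.Join(certsDir, hash+".0")
        // Replace any stale link, then point it at the installed PEM.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "./certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }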
	I0919 19:25:21.373947   29946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:25:21.378011   29946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 19:25:21.378067   29946 kubeadm.go:392] StartCluster: {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:25:21.378145   29946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 19:25:21.378195   29946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 19:25:21.414470   29946 cri.go:89] found id: ""
	I0919 19:25:21.414537   29946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 19:25:21.424173   29946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 19:25:21.433474   29946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 19:25:21.442569   29946 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 19:25:21.442585   29946 kubeadm.go:157] found existing configuration files:
	
	I0919 19:25:21.442641   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 19:25:21.456054   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 19:25:21.456094   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 19:25:21.465434   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 19:25:21.474456   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 19:25:21.474516   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 19:25:21.483588   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 19:25:21.492486   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 19:25:21.492535   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 19:25:21.501852   29946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 19:25:21.510898   29946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 19:25:21.510940   29946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 19:25:21.520189   29946 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 19:25:21.636110   29946 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 19:25:21.636193   29946 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 19:25:21.741569   29946 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 19:25:21.741692   29946 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 19:25:21.741840   29946 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 19:25:21.751361   29946 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 19:25:21.850204   29946 out.go:235]   - Generating certificates and keys ...
	I0919 19:25:21.850323   29946 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 19:25:21.850411   29946 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 19:25:22.052364   29946 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 19:25:22.111035   29946 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 19:25:22.319537   29946 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 19:25:22.387119   29946 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 19:25:22.515422   29946 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 19:25:22.515564   29946 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-076992 localhost] and IPs [192.168.39.173 127.0.0.1 ::1]
	I0919 19:25:22.770343   29946 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 19:25:22.770549   29946 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-076992 localhost] and IPs [192.168.39.173 127.0.0.1 ::1]
	I0919 19:25:22.940962   29946 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 19:25:23.141337   29946 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 19:25:23.227103   29946 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 19:25:23.227182   29946 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 19:25:23.339999   29946 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 19:25:23.488595   29946 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 19:25:23.642974   29946 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 19:25:23.798144   29946 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 19:25:24.008881   29946 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 19:25:24.009486   29946 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 19:25:24.014369   29946 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 19:25:24.145863   29946 out.go:235]   - Booting up control plane ...
	I0919 19:25:24.146000   29946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 19:25:24.146123   29946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 19:25:24.146222   29946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 19:25:24.146351   29946 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 19:25:24.146497   29946 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 19:25:24.146584   29946 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 19:25:24.164755   29946 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 19:25:24.164947   29946 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 19:25:24.666140   29946 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.684085ms
	I0919 19:25:24.666245   29946 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 19:25:30.661904   29946 kubeadm.go:310] [api-check] The API server is healthy after 5.999328933s
	I0919 19:25:30.674821   29946 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 19:25:30.694689   29946 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 19:25:30.728456   29946 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 19:25:30.728705   29946 kubeadm.go:310] [mark-control-plane] Marking the node ha-076992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 19:25:30.742484   29946 kubeadm.go:310] [bootstrap-token] Using token: 9riz07.p2i93yajbhhfpock
	I0919 19:25:30.744002   29946 out.go:235]   - Configuring RBAC rules ...
	I0919 19:25:30.744156   29946 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 19:25:30.749173   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 19:25:30.770991   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 19:25:30.778177   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 19:25:30.786779   29946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 19:25:30.790121   29946 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 19:25:31.069223   29946 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 19:25:31.498557   29946 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 19:25:32.068354   29946 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 19:25:32.068406   29946 kubeadm.go:310] 
	I0919 19:25:32.068512   29946 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 19:25:32.068526   29946 kubeadm.go:310] 
	I0919 19:25:32.068652   29946 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 19:25:32.068663   29946 kubeadm.go:310] 
	I0919 19:25:32.068714   29946 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 19:25:32.068809   29946 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 19:25:32.068885   29946 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 19:25:32.068895   29946 kubeadm.go:310] 
	I0919 19:25:32.068999   29946 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 19:25:32.069019   29946 kubeadm.go:310] 
	I0919 19:25:32.069122   29946 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 19:25:32.069135   29946 kubeadm.go:310] 
	I0919 19:25:32.069210   29946 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 19:25:32.069312   29946 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 19:25:32.069415   29946 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 19:25:32.069425   29946 kubeadm.go:310] 
	I0919 19:25:32.069540   29946 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 19:25:32.069660   29946 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 19:25:32.069677   29946 kubeadm.go:310] 
	I0919 19:25:32.069794   29946 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9riz07.p2i93yajbhhfpock \
	I0919 19:25:32.069948   29946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 \
	I0919 19:25:32.069992   29946 kubeadm.go:310] 	--control-plane 
	I0919 19:25:32.070002   29946 kubeadm.go:310] 
	I0919 19:25:32.070125   29946 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 19:25:32.070153   29946 kubeadm.go:310] 
	I0919 19:25:32.070277   29946 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9riz07.p2i93yajbhhfpock \
	I0919 19:25:32.070418   29946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 
	I0919 19:25:32.071077   29946 kubeadm.go:310] W0919 19:25:21.617150     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 19:25:32.071492   29946 kubeadm.go:310] W0919 19:25:21.618100     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 19:25:32.071645   29946 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
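The join commands printed by kubeadm carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's Subject Public Key Info; a joining node recomputes it to validate the control plane it discovered. A small sketch of computing that value from a ca.crt (the path is illustrative):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    // caCertHash reproduces the value kubeadm prints as
    // --discovery-token-ca-cert-hash sha256:<hex>.
    func caCertHash(caPath string) (string, error) {
        data, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(spki)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(h)
    }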
	I0919 19:25:32.071673   29946 cni.go:84] Creating CNI manager for ""
	I0919 19:25:32.071683   29946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 19:25:32.073578   29946 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 19:25:32.075092   29946 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 19:25:32.080797   29946 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0919 19:25:32.080815   29946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 19:25:32.099353   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 19:25:32.484244   29946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 19:25:32.484317   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:32.484356   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076992 minikube.k8s.io/updated_at=2024_09_19T19_25_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=ha-076992 minikube.k8s.io/primary=true
	I0919 19:25:32.699563   29946 ops.go:34] apiserver oom_adj: -16
	I0919 19:25:32.700092   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:33.200174   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:33.700760   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:34.200308   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:34.700609   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:35.200998   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:35.700578   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 19:25:35.798072   29946 kubeadm.go:1113] duration metric: took 3.313794341s to wait for elevateKubeSystemPrivileges
	I0919 19:25:35.798118   29946 kubeadm.go:394] duration metric: took 14.420052871s to StartCluster
	I0919 19:25:35.798147   29946 settings.go:142] acquiring lock: {Name:mk58f627f177d13dd5c0d47e681e886cab43cce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:35.798243   29946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:25:35.799184   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/kubeconfig: {Name:mk632e082e805bb0ee3f336087f78588814f24af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:25:35.799451   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 19:25:35.799465   29946 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:25:35.799491   29946 start.go:241] waiting for startup goroutines ...
	I0919 19:25:35.799511   29946 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 19:25:35.799597   29946 addons.go:69] Setting storage-provisioner=true in profile "ha-076992"
	I0919 19:25:35.799613   29946 addons.go:234] Setting addon storage-provisioner=true in "ha-076992"
	I0919 19:25:35.799618   29946 addons.go:69] Setting default-storageclass=true in profile "ha-076992"
	I0919 19:25:35.799636   29946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-076992"
	I0919 19:25:35.799646   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:25:35.799697   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:35.800027   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.800066   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.800097   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.800144   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.815590   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0919 19:25:35.815605   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I0919 19:25:35.816049   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.816088   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.816567   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.816586   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.816689   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.816710   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.816987   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.817114   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.817220   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:35.817668   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.817714   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.819378   29946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:25:35.819715   29946 kapi.go:59] client config for ha-076992: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 19:25:35.820225   29946 cert_rotation.go:140] Starting client certificate rotation controller
	I0919 19:25:35.820487   29946 addons.go:234] Setting addon default-storageclass=true in "ha-076992"
	I0919 19:25:35.820530   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:25:35.820906   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.820951   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.833309   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0919 19:25:35.833766   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.834301   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.834327   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.834689   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.834900   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:35.835942   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0919 19:25:35.836351   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.836799   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.836819   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.837143   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:35.837207   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.837734   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:35.837784   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:35.839005   29946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 19:25:35.840904   29946 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 19:25:35.840925   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 19:25:35.840944   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:35.844561   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.845133   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:35.845270   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.845469   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:35.845677   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:35.845845   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:35.845998   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:35.854128   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0919 19:25:35.854570   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:35.855071   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:35.855094   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:35.855375   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:35.855571   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:25:35.857281   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:25:35.857490   29946 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 19:25:35.857507   29946 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 19:25:35.857525   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:25:35.860312   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.860745   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:25:35.860772   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:25:35.860889   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:25:35.861048   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:25:35.861242   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:25:35.861376   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:25:35.927743   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
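The sed pipeline above edits the coredns ConfigMap in flight: it inserts a hosts {} stanza that resolves host.minikube.internal to 192.168.39.1 ahead of the forward plugin, adds a log directive after errors, and pushes the result back with kubectl replace. A sketch of the same Corefile rewrite done with plain string handling instead of sed (the sample Corefile is simplified):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block before the "forward ." plugin
    // line of a Corefile, mirroring what the sed pipeline in the log does.
    func injectHostRecord(corefile, ip, hostname string) string {
        var out []string
        for _, line := range strings.Split(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out = append(out,
                    "        hosts {",
                    fmt.Sprintf("           %s %s", ip, hostname),
                    "           fallthrough",
                    "        }")
            }
            out = append(out, line)
        }
        return strings.Join(out, "\n")
    }

    func main() {
        corefile := ".:53 {\n    errors\n    health\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
        fmt.Println(injectHostRecord(corefile, "192.168.39.1", "host.minikube.internal"))
    }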
	I0919 19:25:36.004938   29946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 19:25:36.013596   29946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 19:25:36.335279   29946 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 19:25:36.504465   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504493   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.504491   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504508   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.504762   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.504781   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.504790   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504802   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.504875   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.504890   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.504900   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.504904   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.504916   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.505030   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.505034   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.505041   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.505114   29946 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 19:25:36.505136   29946 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 19:25:36.505210   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.505215   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.505222   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.505242   29946 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0919 19:25:36.505249   29946 round_trippers.go:469] Request Headers:
	I0919 19:25:36.505260   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:25:36.505265   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:25:36.515769   29946 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0919 19:25:36.516537   29946 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0919 19:25:36.516554   29946 round_trippers.go:469] Request Headers:
	I0919 19:25:36.516565   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:25:36.516572   29946 round_trippers.go:473]     Content-Type: application/json
	I0919 19:25:36.516581   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:25:36.519463   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:25:36.519632   29946 main.go:141] libmachine: Making call to close driver server
	I0919 19:25:36.519650   29946 main.go:141] libmachine: (ha-076992) Calling .Close
	I0919 19:25:36.519937   29946 main.go:141] libmachine: (ha-076992) DBG | Closing plugin on server side
	I0919 19:25:36.519949   29946 main.go:141] libmachine: Successfully made call to close driver server
	I0919 19:25:36.519960   29946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 19:25:36.522604   29946 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0919 19:25:36.523991   29946 addons.go:510] duration metric: took 724.482922ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 19:25:36.524039   29946 start.go:246] waiting for cluster config update ...
	I0919 19:25:36.524053   29946 start.go:255] writing updated cluster config ...
	I0919 19:25:36.525729   29946 out.go:201] 
	I0919 19:25:36.527177   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:36.527269   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:36.528940   29946 out.go:177] * Starting "ha-076992-m02" control-plane node in "ha-076992" cluster
	I0919 19:25:36.530205   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:25:36.530230   29946 cache.go:56] Caching tarball of preloaded images
	I0919 19:25:36.530345   29946 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:25:36.530360   29946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:25:36.530451   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:36.530647   29946 start.go:360] acquireMachinesLock for ha-076992-m02: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:25:36.530701   29946 start.go:364] duration metric: took 30.765µs to acquireMachinesLock for "ha-076992-m02"
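The acquireMachinesLock entry above serializes machine creation on a named lock, with Delay:500ms and Timeout:13m describing a poll-until-deadline loop. minikube's real lock is cross-process (file-backed); the in-process Go sketch below only mirrors that delay/timeout shape, and every name in it is illustrative:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// held tracks which named locks are currently taken within this process.
var (
	mu   sync.Mutex
	held = map[string]bool{}
)

// acquire polls for the named lock every delay until timeout, the same
// Delay/Timeout shape seen in the log's acquireMachinesLock options.
func acquire(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		mu.Lock()
		if !held[name] {
			held[name] = true
			mu.Unlock()
			return nil
		}
		mu.Unlock()
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + name)
		}
		time.Sleep(delay)
	}
}

func release(name string) {
	mu.Lock()
	delete(held, name)
	mu.Unlock()
}

func main() {
	start := time.Now()
	if err := acquire("ha-076992-m02", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer release("ha-076992-m02")
	fmt.Printf("acquired in %s\n", time.Since(start))
}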
	I0919 19:25:36.530723   29946 start.go:93] Provisioning new machine with config: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:25:36.530820   29946 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0919 19:25:36.532606   29946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 19:25:36.532678   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:25:36.532710   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:25:36.547137   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0919 19:25:36.547545   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:25:36.547997   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:25:36.548015   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:25:36.548367   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:25:36.548567   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:36.548746   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:36.548944   29946 start.go:159] libmachine.API.Create for "ha-076992" (driver="kvm2")
	I0919 19:25:36.548973   29946 client.go:168] LocalClient.Create starting
	I0919 19:25:36.549008   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem
	I0919 19:25:36.549050   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:25:36.549087   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:25:36.549192   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem
	I0919 19:25:36.549240   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:25:36.549257   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:25:36.549297   29946 main.go:141] libmachine: Running pre-create checks...
	I0919 19:25:36.549316   29946 main.go:141] libmachine: (ha-076992-m02) Calling .PreCreateCheck
	I0919 19:25:36.549515   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetConfigRaw
	I0919 19:25:36.549909   29946 main.go:141] libmachine: Creating machine...
	I0919 19:25:36.549924   29946 main.go:141] libmachine: (ha-076992-m02) Calling .Create
	I0919 19:25:36.550052   29946 main.go:141] libmachine: (ha-076992-m02) Creating KVM machine...
	I0919 19:25:36.551192   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found existing default KVM network
	I0919 19:25:36.551300   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found existing private KVM network mk-ha-076992
	I0919 19:25:36.551429   29946 main.go:141] libmachine: (ha-076992-m02) Setting up store path in /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02 ...
	I0919 19:25:36.551455   29946 main.go:141] libmachine: (ha-076992-m02) Building disk image from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 19:25:36.551523   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.551412   30305 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:25:36.551615   29946 main.go:141] libmachine: (ha-076992-m02) Downloading /home/jenkins/minikube-integration/19664-7917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0919 19:25:36.777277   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.777143   30305 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa...
	I0919 19:25:36.934632   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.934510   30305 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/ha-076992-m02.rawdisk...
	I0919 19:25:36.934655   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Writing magic tar header
	I0919 19:25:36.934666   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Writing SSH key tar header
	I0919 19:25:36.934677   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:36.934643   30305 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02 ...
	I0919 19:25:36.934732   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02
	I0919 19:25:36.934753   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines
	I0919 19:25:36.934762   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02 (perms=drwx------)
	I0919 19:25:36.934775   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:25:36.934789   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917
	I0919 19:25:36.934801   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 19:25:36.934811   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home/jenkins
	I0919 19:25:36.934821   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines (perms=drwxr-xr-x)
	I0919 19:25:36.934826   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Checking permissions on dir: /home
	I0919 19:25:36.934834   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Skipping /home - not owner
	I0919 19:25:36.934842   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube (perms=drwxr-xr-x)
	I0919 19:25:36.934852   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917 (perms=drwxrwxr-x)
	I0919 19:25:36.934866   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 19:25:36.934884   29946 main.go:141] libmachine: (ha-076992-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 19:25:36.934911   29946 main.go:141] libmachine: (ha-076992-m02) Creating domain...
	I0919 19:25:36.935720   29946 main.go:141] libmachine: (ha-076992-m02) define libvirt domain using xml: 
	I0919 19:25:36.935740   29946 main.go:141] libmachine: (ha-076992-m02) <domain type='kvm'>
	I0919 19:25:36.935750   29946 main.go:141] libmachine: (ha-076992-m02)   <name>ha-076992-m02</name>
	I0919 19:25:36.935757   29946 main.go:141] libmachine: (ha-076992-m02)   <memory unit='MiB'>2200</memory>
	I0919 19:25:36.935765   29946 main.go:141] libmachine: (ha-076992-m02)   <vcpu>2</vcpu>
	I0919 19:25:36.935775   29946 main.go:141] libmachine: (ha-076992-m02)   <features>
	I0919 19:25:36.935783   29946 main.go:141] libmachine: (ha-076992-m02)     <acpi/>
	I0919 19:25:36.935792   29946 main.go:141] libmachine: (ha-076992-m02)     <apic/>
	I0919 19:25:36.935799   29946 main.go:141] libmachine: (ha-076992-m02)     <pae/>
	I0919 19:25:36.935808   29946 main.go:141] libmachine: (ha-076992-m02)     
	I0919 19:25:36.935823   29946 main.go:141] libmachine: (ha-076992-m02)   </features>
	I0919 19:25:36.935834   29946 main.go:141] libmachine: (ha-076992-m02)   <cpu mode='host-passthrough'>
	I0919 19:25:36.935839   29946 main.go:141] libmachine: (ha-076992-m02)   
	I0919 19:25:36.935844   29946 main.go:141] libmachine: (ha-076992-m02)   </cpu>
	I0919 19:25:36.935849   29946 main.go:141] libmachine: (ha-076992-m02)   <os>
	I0919 19:25:36.935856   29946 main.go:141] libmachine: (ha-076992-m02)     <type>hvm</type>
	I0919 19:25:36.935861   29946 main.go:141] libmachine: (ha-076992-m02)     <boot dev='cdrom'/>
	I0919 19:25:36.935865   29946 main.go:141] libmachine: (ha-076992-m02)     <boot dev='hd'/>
	I0919 19:25:36.935876   29946 main.go:141] libmachine: (ha-076992-m02)     <bootmenu enable='no'/>
	I0919 19:25:36.935883   29946 main.go:141] libmachine: (ha-076992-m02)   </os>
	I0919 19:25:36.935888   29946 main.go:141] libmachine: (ha-076992-m02)   <devices>
	I0919 19:25:36.935893   29946 main.go:141] libmachine: (ha-076992-m02)     <disk type='file' device='cdrom'>
	I0919 19:25:36.935901   29946 main.go:141] libmachine: (ha-076992-m02)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/boot2docker.iso'/>
	I0919 19:25:36.935911   29946 main.go:141] libmachine: (ha-076992-m02)       <target dev='hdc' bus='scsi'/>
	I0919 19:25:36.935916   29946 main.go:141] libmachine: (ha-076992-m02)       <readonly/>
	I0919 19:25:36.935923   29946 main.go:141] libmachine: (ha-076992-m02)     </disk>
	I0919 19:25:36.935931   29946 main.go:141] libmachine: (ha-076992-m02)     <disk type='file' device='disk'>
	I0919 19:25:36.935939   29946 main.go:141] libmachine: (ha-076992-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 19:25:36.935946   29946 main.go:141] libmachine: (ha-076992-m02)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/ha-076992-m02.rawdisk'/>
	I0919 19:25:36.935951   29946 main.go:141] libmachine: (ha-076992-m02)       <target dev='hda' bus='virtio'/>
	I0919 19:25:36.935958   29946 main.go:141] libmachine: (ha-076992-m02)     </disk>
	I0919 19:25:36.935962   29946 main.go:141] libmachine: (ha-076992-m02)     <interface type='network'>
	I0919 19:25:36.935970   29946 main.go:141] libmachine: (ha-076992-m02)       <source network='mk-ha-076992'/>
	I0919 19:25:36.935974   29946 main.go:141] libmachine: (ha-076992-m02)       <model type='virtio'/>
	I0919 19:25:36.935980   29946 main.go:141] libmachine: (ha-076992-m02)     </interface>
	I0919 19:25:36.935987   29946 main.go:141] libmachine: (ha-076992-m02)     <interface type='network'>
	I0919 19:25:36.935994   29946 main.go:141] libmachine: (ha-076992-m02)       <source network='default'/>
	I0919 19:25:36.935999   29946 main.go:141] libmachine: (ha-076992-m02)       <model type='virtio'/>
	I0919 19:25:36.936006   29946 main.go:141] libmachine: (ha-076992-m02)     </interface>
	I0919 19:25:36.936010   29946 main.go:141] libmachine: (ha-076992-m02)     <serial type='pty'>
	I0919 19:25:36.936015   29946 main.go:141] libmachine: (ha-076992-m02)       <target port='0'/>
	I0919 19:25:36.936021   29946 main.go:141] libmachine: (ha-076992-m02)     </serial>
	I0919 19:25:36.936026   29946 main.go:141] libmachine: (ha-076992-m02)     <console type='pty'>
	I0919 19:25:36.936033   29946 main.go:141] libmachine: (ha-076992-m02)       <target type='serial' port='0'/>
	I0919 19:25:36.936037   29946 main.go:141] libmachine: (ha-076992-m02)     </console>
	I0919 19:25:36.936041   29946 main.go:141] libmachine: (ha-076992-m02)     <rng model='virtio'>
	I0919 19:25:36.936048   29946 main.go:141] libmachine: (ha-076992-m02)       <backend model='random'>/dev/random</backend>
	I0919 19:25:36.936052   29946 main.go:141] libmachine: (ha-076992-m02)     </rng>
	I0919 19:25:36.936057   29946 main.go:141] libmachine: (ha-076992-m02)     
	I0919 19:25:36.936065   29946 main.go:141] libmachine: (ha-076992-m02)     
	I0919 19:25:36.936070   29946 main.go:141] libmachine: (ha-076992-m02)   </devices>
	I0919 19:25:36.936080   29946 main.go:141] libmachine: (ha-076992-m02) </domain>
	I0919 19:25:36.936086   29946 main.go:141] libmachine: (ha-076992-m02) 
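The "define libvirt domain using xml" block above is rendered from per-node values (name, memory, vCPUs, disk path, network) before being handed to libvirt. A cut-down Go sketch of that template approach, assuming a hypothetical domainConfig struct and a placeholder disk path rather than the driver's full template:

package main

import (
	"os"
	"text/template"
)

// A pared-down libvirt domain template covering only the fields that vary
// per node in the log. Illustrative; the real kvm2 driver template carries
// many more devices (cdrom, serial console, rng, second NIC).
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

// domainConfig is a hypothetical parameter struct for the template above.
type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	cfg := domainConfig{
		Name:      "ha-076992-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-076992-m02.rawdisk", // placeholder path
		Network:   "mk-ha-076992",
	}
	_ = tmpl.Execute(os.Stdout, cfg)
}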
	I0919 19:25:36.942900   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:0e:87:b8 in network default
	I0919 19:25:36.943479   29946 main.go:141] libmachine: (ha-076992-m02) Ensuring networks are active...
	I0919 19:25:36.943509   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:36.944120   29946 main.go:141] libmachine: (ha-076992-m02) Ensuring network default is active
	I0919 19:25:36.944391   29946 main.go:141] libmachine: (ha-076992-m02) Ensuring network mk-ha-076992 is active
	I0919 19:25:36.944707   29946 main.go:141] libmachine: (ha-076992-m02) Getting domain xml...
	I0919 19:25:36.945497   29946 main.go:141] libmachine: (ha-076992-m02) Creating domain...
	I0919 19:25:38.180680   29946 main.go:141] libmachine: (ha-076992-m02) Waiting to get IP...
	I0919 19:25:38.181469   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:38.181903   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:38.181932   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:38.181877   30305 retry.go:31] will retry after 244.203763ms: waiting for machine to come up
	I0919 19:25:38.427374   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:38.427795   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:38.427822   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:38.427757   30305 retry.go:31] will retry after 281.507755ms: waiting for machine to come up
	I0919 19:25:38.711466   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:38.711935   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:38.711962   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:38.711890   30305 retry.go:31] will retry after 465.962788ms: waiting for machine to come up
	I0919 19:25:39.179211   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:39.179652   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:39.179684   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:39.179602   30305 retry.go:31] will retry after 602.174018ms: waiting for machine to come up
	I0919 19:25:39.783380   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:39.783868   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:39.783897   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:39.783820   30305 retry.go:31] will retry after 752.65735ms: waiting for machine to come up
	I0919 19:25:40.537821   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:40.538325   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:40.538351   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:40.538278   30305 retry.go:31] will retry after 659.774912ms: waiting for machine to come up
	I0919 19:25:41.200055   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:41.200443   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:41.200472   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:41.200416   30305 retry.go:31] will retry after 933.838274ms: waiting for machine to come up
	I0919 19:25:42.135781   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:42.136230   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:42.136260   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:42.136180   30305 retry.go:31] will retry after 1.469374699s: waiting for machine to come up
	I0919 19:25:43.606700   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:43.607102   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:43.607128   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:43.607064   30305 retry.go:31] will retry after 1.652950342s: waiting for machine to come up
	I0919 19:25:45.261341   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:45.261788   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:45.261815   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:45.261744   30305 retry.go:31] will retry after 1.905868131s: waiting for machine to come up
	I0919 19:25:47.169717   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:47.170193   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:47.170220   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:47.170129   30305 retry.go:31] will retry after 2.065748875s: waiting for machine to come up
	I0919 19:25:49.238320   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:49.238667   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:49.238694   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:49.238621   30305 retry.go:31] will retry after 2.815922548s: waiting for machine to come up
	I0919 19:25:52.055810   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:52.056201   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:52.056225   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:52.056152   30305 retry.go:31] will retry after 2.765202997s: waiting for machine to come up
	I0919 19:25:54.825094   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:54.825576   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find current IP address of domain ha-076992-m02 in network mk-ha-076992
	I0919 19:25:54.825607   29946 main.go:141] libmachine: (ha-076992-m02) DBG | I0919 19:25:54.825532   30305 retry.go:31] will retry after 3.746769052s: waiting for machine to come up
	I0919 19:25:58.574430   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.574995   29946 main.go:141] libmachine: (ha-076992-m02) Found IP for machine: 192.168.39.232
	I0919 19:25:58.575023   29946 main.go:141] libmachine: (ha-076992-m02) Reserving static IP address...
	I0919 19:25:58.575036   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has current primary IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.575526   29946 main.go:141] libmachine: (ha-076992-m02) DBG | unable to find host DHCP lease matching {name: "ha-076992-m02", mac: "52:54:00:5f:39:42", ip: "192.168.39.232"} in network mk-ha-076992
	I0919 19:25:58.646823   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Getting to WaitForSSH function...
	I0919 19:25:58.646849   29946 main.go:141] libmachine: (ha-076992-m02) Reserved static IP address: 192.168.39.232
	I0919 19:25:58.646862   29946 main.go:141] libmachine: (ha-076992-m02) Waiting for SSH to be available...
	I0919 19:25:58.649682   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.650123   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:58.650200   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.650328   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Using SSH client type: external
	I0919 19:25:58.650350   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa (-rw-------)
	I0919 19:25:58.650383   29946 main.go:141] libmachine: (ha-076992-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 19:25:58.650401   29946 main.go:141] libmachine: (ha-076992-m02) DBG | About to run SSH command:
	I0919 19:25:58.650416   29946 main.go:141] libmachine: (ha-076992-m02) DBG | exit 0
	I0919 19:25:58.777771   29946 main.go:141] libmachine: (ha-076992-m02) DBG | SSH cmd err, output: <nil>: 
	I0919 19:25:58.778064   29946 main.go:141] libmachine: (ha-076992-m02) KVM machine creation complete!
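The repeated "will retry after ...: waiting for machine to come up" lines above show the driver polling for the VM's DHCP lease with a growing, jittered delay (244ms, 281ms, 465ms, ... up to a few seconds) until an IP appears. A generic Go retry helper in that spirit; the backoff formula and names here are assumptions for illustration, not minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or maxAttempts is reached, sleeping a
// randomized, growing delay between attempts.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay linearly with the attempt number and add jitter.
		delay := base*time.Duration(attempt+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	attempts := 0
	err := retry(15, 250*time.Millisecond, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}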
	I0919 19:25:58.778379   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetConfigRaw
	I0919 19:25:58.778927   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:58.779131   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:58.779306   29946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 19:25:58.779329   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetState
	I0919 19:25:58.780634   29946 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 19:25:58.780650   29946 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 19:25:58.780657   29946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 19:25:58.780663   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:58.783144   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.783573   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:58.783595   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.783851   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:58.784010   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.784179   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.784350   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:58.784515   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:58.784730   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:58.784742   29946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 19:25:58.888256   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:58.888282   29946 main.go:141] libmachine: Detecting the provisioner...
	I0919 19:25:58.888293   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:58.891062   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.891412   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:58.891443   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:58.891627   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:58.891808   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.891961   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:58.892118   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:58.892285   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:58.892465   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:58.892476   29946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 19:25:58.997853   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0919 19:25:58.997904   29946 main.go:141] libmachine: found compatible host: buildroot
	I0919 19:25:58.997917   29946 main.go:141] libmachine: Provisioning with buildroot...
	I0919 19:25:58.997926   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:58.998154   29946 buildroot.go:166] provisioning hostname "ha-076992-m02"
	I0919 19:25:58.998180   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:58.998363   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.001218   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.001600   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.001625   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.001769   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.001924   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.002057   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.002199   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.002363   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.002512   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.002523   29946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992-m02 && echo "ha-076992-m02" | sudo tee /etc/hostname
	I0919 19:25:59.119914   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992-m02
	
	I0919 19:25:59.119943   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.122597   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.122932   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.122959   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.123102   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.123288   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.123386   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.123535   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.123663   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.123816   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.123831   29946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:25:59.234249   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:25:59.234283   29946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:25:59.234304   29946 buildroot.go:174] setting up certificates
	I0919 19:25:59.234313   29946 provision.go:84] configureAuth start
	I0919 19:25:59.234321   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetMachineName
	I0919 19:25:59.234593   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:25:59.237517   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.237906   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.237938   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.238086   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.240541   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.240911   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.240937   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.241052   29946 provision.go:143] copyHostCerts
	I0919 19:25:59.241116   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:59.241157   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:25:59.241168   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:25:59.241245   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:25:59.241332   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:59.241361   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:25:59.241371   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:25:59.241408   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:25:59.241468   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:59.241492   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:25:59.241501   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:25:59.241533   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:25:59.241596   29946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992-m02 san=[127.0.0.1 192.168.39.232 ha-076992-m02 localhost minikube]
	I0919 19:25:59.357826   29946 provision.go:177] copyRemoteCerts
	I0919 19:25:59.357894   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:25:59.357924   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.360530   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.360884   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.360911   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.361149   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.361317   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.361482   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.361595   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:25:59.443240   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:25:59.443310   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:25:59.469433   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:25:59.469519   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 19:25:59.495952   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:25:59.496024   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:25:59.522724   29946 provision.go:87] duration metric: took 288.400561ms to configureAuth
	I0919 19:25:59.522748   29946 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:25:59.522917   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:59.522985   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.525520   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.525889   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.525912   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.526077   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.526238   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.526387   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.526517   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.526656   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.526814   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.526826   29946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:25:59.752869   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:25:59.752893   29946 main.go:141] libmachine: Checking connection to Docker...
	I0919 19:25:59.752905   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetURL
	I0919 19:25:59.754292   29946 main.go:141] libmachine: (ha-076992-m02) DBG | Using libvirt version 6000000
	I0919 19:25:59.756429   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.756753   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.756775   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.756952   29946 main.go:141] libmachine: Docker is up and running!
	I0919 19:25:59.756967   29946 main.go:141] libmachine: Reticulating splines...
	I0919 19:25:59.756974   29946 client.go:171] duration metric: took 23.20799249s to LocalClient.Create
	I0919 19:25:59.756996   29946 start.go:167] duration metric: took 23.208049551s to libmachine.API.Create "ha-076992"
	I0919 19:25:59.757009   29946 start.go:293] postStartSetup for "ha-076992-m02" (driver="kvm2")
	I0919 19:25:59.757026   29946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:25:59.757049   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:25:59.757304   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:25:59.757329   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.759641   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.760058   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.760084   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.760219   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.760398   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.760511   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.760656   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:25:59.843621   29946 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:25:59.848206   29946 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:25:59.848232   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:25:59.848296   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:25:59.848392   29946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:25:59.848404   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:25:59.848515   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:25:59.858316   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:25:59.885251   29946 start.go:296] duration metric: took 128.22453ms for postStartSetup
	I0919 19:25:59.885295   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetConfigRaw
	I0919 19:25:59.885821   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:25:59.888318   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.888680   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.888708   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.888945   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:25:59.889154   29946 start.go:128] duration metric: took 23.358320855s to createHost
	I0919 19:25:59.889176   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:25:59.891311   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.891643   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:25:59.891660   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:25:59.891792   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:25:59.891944   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.892068   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:25:59.892176   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:25:59.892294   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:25:59.892443   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0919 19:25:59.892452   29946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:26:00.002053   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726773959.961389731
	
	I0919 19:26:00.002074   29946 fix.go:216] guest clock: 1726773959.961389731
	I0919 19:26:00.002082   29946 fix.go:229] Guest: 2024-09-19 19:25:59.961389731 +0000 UTC Remote: 2024-09-19 19:25:59.889165721 +0000 UTC m=+69.375202371 (delta=72.22401ms)
	I0919 19:26:00.002098   29946 fix.go:200] guest clock delta is within tolerance: 72.22401ms
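The "guest clock" lines above compare the timestamp the VM reports via `date +%s.%N` with the host-side time recorded when the SSH command returned, and accept the machine when the delta is small. A Go sketch of that comparison using the values from the log; the 2s tolerance is an assumed threshold, the log only shows that a ~72ms delta passed the check:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest VM clock is close enough to the
// host-side reference time, returning the absolute delta.
func withinTolerance(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1726773959, 961389731)
	remote := time.Date(2024, 9, 19, 19, 25, 59, 889165721, time.UTC)
	delta, ok := withinTolerance(guest, remote, 2*time.Second)
	fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
}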
	I0919 19:26:00.002103   29946 start.go:83] releasing machines lock for "ha-076992-m02", held for 23.47139118s
	I0919 19:26:00.002120   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.002405   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:26:00.005381   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.005748   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:00.005768   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.008103   29946 out.go:177] * Found network options:
	I0919 19:26:00.009556   29946 out.go:177]   - NO_PROXY=192.168.39.173
	W0919 19:26:00.010768   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:26:00.010799   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.011365   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.011545   29946 main.go:141] libmachine: (ha-076992-m02) Calling .DriverName
	I0919 19:26:00.011641   29946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:26:00.011680   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	W0919 19:26:00.011835   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:26:00.011913   29946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:26:00.011935   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHHostname
	I0919 19:26:00.014635   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.014741   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.015053   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:00.015078   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.015105   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:00.015122   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:00.015192   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:26:00.015389   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHPort
	I0919 19:26:00.015425   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:26:00.015551   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:26:00.015586   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHKeyPath
	I0919 19:26:00.015680   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetSSHUsername
	I0919 19:26:00.015686   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:26:00.015847   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m02/id_rsa Username:docker}
	I0919 19:26:00.243733   29946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:26:00.250260   29946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:26:00.250318   29946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:26:00.266157   29946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 19:26:00.266187   29946 start.go:495] detecting cgroup driver to use...
	I0919 19:26:00.266257   29946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:26:00.284373   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:26:00.299098   29946 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:26:00.299161   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:26:00.313776   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:26:00.328144   29946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:26:00.450118   29946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:26:00.592879   29946 docker.go:233] disabling docker service ...
	I0919 19:26:00.592942   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:26:00.607656   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:26:00.620367   29946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:26:00.756551   29946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:26:00.888081   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:26:00.901911   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:26:00.920807   29946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:26:00.920876   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.931652   29946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:26:00.931715   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.944741   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.955512   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.966422   29946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:26:00.977466   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:00.988029   29946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:26:01.011140   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
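	(Editor's note: taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A minimal sketch of confirming the result on the guest, limited to the keys the commands above actually touch:)
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # expected values, as written by the commands above:
	  #   pause_image = "registry.k8s.io/pause:3.10"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls = [ ... ] block)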
	I0919 19:26:01.022261   29946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:26:01.031891   29946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 19:26:01.031944   29946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 19:26:01.044785   29946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
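	(Editor's note: for anyone reproducing this networking prep by hand, a short sketch of the same steps and checks, using the module and sysctl names from the log; the value reported right after modprobe can vary by distro:)
	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables      # key exists (and is typically 1) once br_netfilter is loaded
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	  sysctl net.ipv4.ip_forward                     # should report: net.ipv4.ip_forward = 1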
	I0919 19:26:01.054444   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:26:01.182828   29946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:26:01.272829   29946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:26:01.272907   29946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:26:01.277937   29946 start.go:563] Will wait 60s for crictl version
	I0919 19:26:01.277997   29946 ssh_runner.go:195] Run: which crictl
	I0919 19:26:01.282022   29946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:26:01.321749   29946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:26:01.321825   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:26:01.350681   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:26:01.380754   29946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:26:01.382497   29946 out.go:177]   - env NO_PROXY=192.168.39.173
	I0919 19:26:01.383753   29946 main.go:141] libmachine: (ha-076992-m02) Calling .GetIP
	I0919 19:26:01.386332   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:01.386661   29946 main.go:141] libmachine: (ha-076992-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:39:42", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:51 +0000 UTC Type:0 Mac:52:54:00:5f:39:42 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-076992-m02 Clientid:01:52:54:00:5f:39:42}
	I0919 19:26:01.386690   29946 main.go:141] libmachine: (ha-076992-m02) DBG | domain ha-076992-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:5f:39:42 in network mk-ha-076992
	I0919 19:26:01.386880   29946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:26:01.391190   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:26:01.403767   29946 mustload.go:65] Loading cluster: ha-076992
	I0919 19:26:01.403960   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:26:01.404199   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:01.404248   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:01.418919   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0919 19:26:01.419393   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:01.419861   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:01.419882   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:01.420168   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:01.420331   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:26:01.421875   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:26:01.422160   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:01.422195   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:01.437017   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0919 19:26:01.437468   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:01.437893   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:01.437915   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:01.438300   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:01.438497   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:26:01.438639   29946 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.232
	I0919 19:26:01.438648   29946 certs.go:194] generating shared ca certs ...
	I0919 19:26:01.438661   29946 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:26:01.438777   29946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:26:01.438815   29946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:26:01.438824   29946 certs.go:256] generating profile certs ...
	I0919 19:26:01.438904   29946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:26:01.438934   29946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548
	I0919 19:26:01.438954   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.232 192.168.39.254]
	I0919 19:26:01.570629   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548 ...
	I0919 19:26:01.570661   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548: {Name:mk20c396761e9ccfefb28b7b4e5db83bbd0de404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:26:01.570827   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548 ...
	I0919 19:26:01.570840   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548: {Name:mkbba11c725a3524e5cbb6109330222760dc216a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:26:01.570911   29946 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.52cea548 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:26:01.571040   29946 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.52cea548 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:26:01.571164   29946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:26:01.571178   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:26:01.571191   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:26:01.571239   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:26:01.571263   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:26:01.571276   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:26:01.571286   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:26:01.571298   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:26:01.571308   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:26:01.571356   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:26:01.571390   29946 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:26:01.571399   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:26:01.571419   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:26:01.571441   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:26:01.571462   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:26:01.571500   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:26:01.571524   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:26:01.571538   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:01.571552   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:26:01.571582   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:26:01.574554   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:01.574961   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:26:01.574989   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:01.575190   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:26:01.575379   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:26:01.575503   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:26:01.575643   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:26:01.649555   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 19:26:01.654610   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 19:26:01.666818   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 19:26:01.670813   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 19:26:01.681979   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 19:26:01.686362   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 19:26:01.696685   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 19:26:01.700738   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 19:26:01.711578   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 19:26:01.715684   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 19:26:01.727402   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 19:26:01.731821   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 19:26:01.743441   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:26:01.772076   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:26:01.796535   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:26:01.821191   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:26:01.847148   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 19:26:01.871474   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 19:26:01.894939   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:26:01.918215   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:26:01.943385   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:26:01.968566   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:26:01.992928   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:26:02.017141   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 19:26:02.033989   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 19:26:02.051070   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 19:26:02.067651   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 19:26:02.084618   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 19:26:02.100924   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 19:26:02.117332   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 19:26:02.133574   29946 ssh_runner.go:195] Run: openssl version
	I0919 19:26:02.139079   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:26:02.149396   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:26:02.153709   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:26:02.153753   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:26:02.159372   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:26:02.169469   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:26:02.179773   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:26:02.184096   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:26:02.184140   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:26:02.189599   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:26:02.199935   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:26:02.210371   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:02.214711   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:02.214755   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:26:02.220241   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
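	(Editor's note: the three certificate installs above share one pattern: copy the PEM into /usr/share/ca-certificates, then link it into /etc/ssl/certs under its OpenSSL subject-hash name. A sketch of that pattern, using the minikubeCA file from this run as the example:)
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints b5213941 for this CA, matching the symlink above
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # ".0" = first certificate with this hash value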
	I0919 19:26:02.230545   29946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:26:02.234717   29946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 19:26:02.234762   29946 kubeadm.go:934] updating node {m02 192.168.39.232 8443 v1.31.1 crio true true} ...
	I0919 19:26:02.234833   29946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:26:02.234855   29946 kube-vip.go:115] generating kube-vip config ...
	I0919 19:26:02.234882   29946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:26:02.250138   29946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:26:02.250208   29946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 19:26:02.250263   29946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:26:02.260294   29946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0919 19:26:02.260356   29946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0919 19:26:02.271123   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0919 19:26:02.271155   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:26:02.271170   29946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0919 19:26:02.271131   29946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0919 19:26:02.271252   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:26:02.275907   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0919 19:26:02.275932   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0919 19:26:04.726131   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:26:04.741861   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:26:04.741942   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:26:04.747080   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0919 19:26:04.747110   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0919 19:26:05.138782   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:26:05.138864   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:26:05.143906   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0919 19:26:05.143942   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
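	(Editor's note: the kubectl/kubelet/kubeadm binaries come from the standard Kubernetes release bucket, with a SHA-256 file published next to each binary, which is what the checksum= fragments in the download URLs above point at. A manual fetch-and-verify of one of them, as a sketch:)
	  curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl
	  curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # prints "kubectl: OK" on a good download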
	I0919 19:26:05.391094   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 19:26:05.402470   29946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 19:26:05.419083   29946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:26:05.435530   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
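	(Editor's note: kube-vip runs as a static pod from /etc/kubernetes/manifests, so once the kubelet is started it should both launch the container and, while this node holds the leader lease, bind the 192.168.39.254 VIP on eth0. Two quick on-node checks as a sketch; crictl's --name filter is a regular expression:)
	  sudo crictl ps --name kube-vip              # container created from the static-pod manifest written above
	  ip addr show eth0 | grep 192.168.39.254     # VIP present only while this node is the kube-vip leader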
	I0919 19:26:05.452330   29946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:26:05.456142   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:26:05.468600   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:26:05.590348   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:26:05.607783   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:26:05.608143   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:05.608190   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:05.622922   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I0919 19:26:05.623374   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:05.623806   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:05.623826   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:05.624115   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:05.624311   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:26:05.624422   29946 start.go:317] joinCluster: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:26:05.624512   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 19:26:05.624535   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:26:05.627671   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:05.628201   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:26:05.628231   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:26:05.628426   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:26:05.628584   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:26:05.628775   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:26:05.628963   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:26:05.783004   29946 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:26:05.783062   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k2rxz4.c60ygnjp1ja274y0 --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m02 --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443"
	I0919 19:26:26.852036   29946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k2rxz4.c60ygnjp1ja274y0 --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m02 --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443": (21.068945229s)
	I0919 19:26:26.852075   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 19:26:27.433951   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076992-m02 minikube.k8s.io/updated_at=2024_09_19T19_26_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=ha-076992 minikube.k8s.io/primary=false
	I0919 19:26:27.570431   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076992-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 19:26:27.685911   29946 start.go:319] duration metric: took 22.061483301s to joinCluster
	I0919 19:26:27.685989   29946 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:26:27.686288   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:26:27.687539   29946 out.go:177] * Verifying Kubernetes components...
	I0919 19:26:27.689112   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:26:27.988894   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:26:28.006672   29946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:26:28.006924   29946 kapi.go:59] client config for ha-076992: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 19:26:28.006987   29946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.173:8443
	I0919 19:26:28.007186   29946 node_ready.go:35] waiting up to 6m0s for node "ha-076992-m02" to be "Ready" ...
	I0919 19:26:28.007293   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:28.007303   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:28.007314   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:28.007319   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:28.016756   29946 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0919 19:26:28.508333   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:28.508360   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:28.508372   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:28.508378   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:28.516049   29946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0919 19:26:29.007871   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:29.007898   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:29.007909   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:29.007913   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:29.011642   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:29.507413   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:29.507439   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:29.507447   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:29.507452   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:29.511660   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:30.007557   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:30.007578   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:30.007586   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:30.007591   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:30.011038   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:30.011598   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:30.508074   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:30.508099   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:30.508109   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:30.508112   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:30.511669   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:31.007638   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:31.007657   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:31.007665   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:31.007669   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:31.011418   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:31.507577   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:31.507605   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:31.507615   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:31.507626   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:31.511375   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:32.007718   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:32.007740   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:32.007749   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:32.007756   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:32.011650   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:32.012415   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:32.507637   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:32.507664   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:32.507676   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:32.507683   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:32.511755   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:33.008213   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:33.008234   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:33.008242   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:33.008246   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:33.011792   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:33.507684   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:33.507712   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:33.507720   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:33.507725   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:33.511853   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:34.007466   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:34.007488   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:34.007496   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:34.007500   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:34.012044   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:34.013001   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:34.508399   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:34.508419   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:34.508429   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:34.508434   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:34.512448   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:35.007796   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:35.007816   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:35.007824   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:35.007827   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:35.011062   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:35.508040   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:35.508073   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:35.508085   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:35.508091   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:35.511620   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:36.008049   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:36.008071   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:36.008079   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:36.008083   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:36.011403   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:36.508302   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:36.508324   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:36.508332   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:36.508337   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:36.511571   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:36.512300   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:37.007542   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:37.007564   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:37.007575   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:37.007582   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:37.011805   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:37.508050   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:37.508072   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:37.508080   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:37.508085   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:37.511538   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:38.007485   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:38.007511   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:38.007521   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:38.007533   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:38.011022   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:38.508063   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:38.508084   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:38.508092   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:38.508096   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:38.511492   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:39.008426   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:39.008451   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:39.008461   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:39.008467   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:39.012681   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:39.013788   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:39.508128   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:39.508151   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:39.508160   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:39.508165   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:39.512449   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:40.008306   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:40.008329   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:40.008337   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:40.008340   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:40.011906   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:40.508039   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:40.508061   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:40.508069   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:40.508074   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:40.511457   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:41.007677   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:41.007700   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:41.007709   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:41.007714   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:41.011506   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:41.507543   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:41.507564   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:41.507572   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:41.507578   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:41.510792   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:41.511569   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:42.008395   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:42.008418   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:42.008426   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:42.008430   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:42.011477   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:42.507458   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:42.507479   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:42.507487   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:42.507490   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:42.510874   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:43.008232   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:43.008255   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:43.008263   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:43.008266   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:43.011709   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:43.507746   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:43.507769   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:43.507778   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:43.507783   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:43.511265   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:43.511790   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:44.008252   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:44.008274   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:44.008284   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:44.008290   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:44.011544   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:44.507848   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:44.507875   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:44.507888   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:44.507894   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:44.510925   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:45.007953   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:45.007975   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:45.007983   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:45.007987   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:45.012020   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:45.508267   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:45.508293   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:45.508302   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:45.508309   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:45.512037   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:45.512623   29946 node_ready.go:53] node "ha-076992-m02" has status "Ready":"False"
	I0919 19:26:46.008137   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.008158   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.008165   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.008169   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.012104   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.012731   29946 node_ready.go:49] node "ha-076992-m02" has status "Ready":"True"
	I0919 19:26:46.012750   29946 node_ready.go:38] duration metric: took 18.005542928s for node "ha-076992-m02" to be "Ready" ...
	I0919 19:26:46.012759   29946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
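[Editor's note] The block above is minikube polling GET /api/v1/nodes/ha-076992-m02 roughly every 500ms until the node's Ready condition turns True (about 18s here); the per-pod checks that follow use the same pattern against individual kube-system pods. A minimal sketch of that readiness poll with client-go is below; the helper name, kubeconfig path, and timeout are illustrative, not minikube's own code.

    // waitNodeReady polls a node until its Ready condition is True or the context expires.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        tick := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
        defer tick.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C:
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        fmt.Println(waitNodeReady(ctx, cs, "ha-076992-m02"))
    }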
	I0919 19:26:46.012828   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:46.012838   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.012845   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.012851   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.017898   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:26:46.023994   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.024066   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bst8x
	I0919 19:26:46.024075   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.024083   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.024087   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.027015   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.027716   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.027731   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.027738   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.027742   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.030392   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.030831   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.030846   29946 pod_ready.go:82] duration metric: took 6.831386ms for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.030853   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.030893   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nbds4
	I0919 19:26:46.030900   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.030907   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.030911   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.033599   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.034104   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.034116   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.034122   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.034125   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.036185   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.036561   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.036576   29946 pod_ready.go:82] duration metric: took 5.717406ms for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.036584   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.036632   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992
	I0919 19:26:46.036642   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.036649   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.036654   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.038980   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.039515   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.039526   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.039532   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.039535   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.041804   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.042161   29946 pod_ready.go:93] pod "etcd-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.042174   29946 pod_ready.go:82] duration metric: took 5.5845ms for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.042181   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.042226   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m02
	I0919 19:26:46.042236   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.042242   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.042247   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.044464   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.045049   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.045081   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.045091   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.045095   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.047141   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:46.047566   29946 pod_ready.go:93] pod "etcd-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.047579   29946 pod_ready.go:82] duration metric: took 5.393087ms for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.047590   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.208948   29946 request.go:632] Waited for 161.306549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:26:46.209021   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:26:46.209027   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.209035   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.209041   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.212646   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.408764   29946 request.go:632] Waited for 195.355169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.408850   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:46.408861   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.408869   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.408878   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.412302   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.412793   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.412809   29946 pod_ready.go:82] duration metric: took 365.213979ms for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
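[Editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" messages that start appearing here are emitted by client-go when its local token-bucket rate limiter delays a request; they are not server-side API Priority and Fairness. The limiter is configured through QPS and Burst on rest.Config (defaults are QPS=5, Burst=10). A short sketch of raising those limits, with illustrative values and kubeconfig path:

    // Raising client-go's client-side rate limit; values and path are illustrative.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        // The ~150-200ms waits logged above are the limiter smoothing the burst of
        // GETs issued back-to-back during the pod-ready checks.
        cfg.QPS = 50
        cfg.Burst = 100
        _ = kubernetes.NewForConfigOrDie(cfg)
    }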
	I0919 19:26:46.412818   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.609130   29946 request.go:632] Waited for 196.247315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:26:46.609190   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:26:46.609195   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.609203   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.609205   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.612762   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.808777   29946 request.go:632] Waited for 195.389035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.808839   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:46.808844   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:46.808851   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:46.808854   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:46.812076   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:46.812671   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:46.812690   29946 pod_ready.go:82] duration metric: took 399.865629ms for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:46.812701   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.008865   29946 request.go:632] Waited for 196.089609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:26:47.008926   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:26:47.008931   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.008940   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.008944   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.012069   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:47.208226   29946 request.go:632] Waited for 195.285225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:47.208310   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:47.208321   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.208333   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.208340   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.211658   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:47.212273   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:47.212334   29946 pod_ready.go:82] duration metric: took 399.616733ms for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.212376   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.408402   29946 request.go:632] Waited for 195.932577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:26:47.408471   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:26:47.408476   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.408483   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.408488   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.412589   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:47.608602   29946 request.go:632] Waited for 195.361457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:47.608664   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:47.608670   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.608677   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.608683   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.611901   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:47.612434   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:47.612461   29946 pod_ready.go:82] duration metric: took 400.073222ms for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.612471   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:47.808579   29946 request.go:632] Waited for 196.032947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:26:47.808639   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:26:47.808647   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:47.808656   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:47.808663   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:47.811981   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.009006   29946 request.go:632] Waited for 196.338909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.009055   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.009072   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.009080   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.009088   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.012721   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.013205   29946 pod_ready.go:93] pod "kube-proxy-4d8dc" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:48.013223   29946 pod_ready.go:82] duration metric: took 400.743363ms for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.013233   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.208239   29946 request.go:632] Waited for 194.931072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:26:48.208327   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:26:48.208336   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.208357   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.208367   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.211846   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.408960   29946 request.go:632] Waited for 196.372524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:48.409013   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:48.409018   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.409025   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.409030   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.412044   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:26:48.412602   29946 pod_ready.go:93] pod "kube-proxy-tjtfj" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:48.412619   29946 pod_ready.go:82] duration metric: took 399.379304ms for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.412628   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.608768   29946 request.go:632] Waited for 196.067805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:26:48.608847   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:26:48.608853   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.608860   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.608867   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.612031   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.809050   29946 request.go:632] Waited for 196.389681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.809131   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:26:48.809137   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:48.809146   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:48.809149   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:48.812475   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:48.813104   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:48.813123   29946 pod_ready.go:82] duration metric: took 400.488766ms for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:48.813133   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:49.009203   29946 request.go:632] Waited for 196.009229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:26:49.009276   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:26:49.009288   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.009300   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.009312   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.013885   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:49.208739   29946 request.go:632] Waited for 194.357315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:49.208808   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:26:49.208813   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.208822   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.208826   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.212311   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:49.212795   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:26:49.212813   29946 pod_ready.go:82] duration metric: took 399.67345ms for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:26:49.212826   29946 pod_ready.go:39] duration metric: took 3.200055081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 19:26:49.212844   29946 api_server.go:52] waiting for apiserver process to appear ...
	I0919 19:26:49.212896   29946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:26:49.228541   29946 api_server.go:72] duration metric: took 21.542513425s to wait for apiserver process to appear ...
	I0919 19:26:49.228570   29946 api_server.go:88] waiting for apiserver healthz status ...
	I0919 19:26:49.228591   29946 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0919 19:26:49.232969   29946 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0919 19:26:49.233025   29946 round_trippers.go:463] GET https://192.168.39.173:8443/version
	I0919 19:26:49.233033   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.233040   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.233048   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.234012   29946 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0919 19:26:49.234106   29946 api_server.go:141] control plane version: v1.31.1
	I0919 19:26:49.234128   29946 api_server.go:131] duration metric: took 5.550093ms to wait for apiserver health ...
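[Editor's note] After the pod checks, the test probes /healthz on the apiserver directly and then reads GET /version to report the control-plane version (v1.31.1). A self-contained sketch of that health probe follows; it skips TLS verification to stay short, whereas the real test authenticates with the profile's generated CA and client certificates.

    // Probing the apiserver /healthz endpoint, as in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
        }
        resp, err := client.Get("https://192.168.39.173:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }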
	I0919 19:26:49.234140   29946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 19:26:49.408598   29946 request.go:632] Waited for 174.396795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.408664   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.408670   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.408680   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.408697   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.414220   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:26:49.419326   29946 system_pods.go:59] 17 kube-system pods found
	I0919 19:26:49.419355   29946 system_pods.go:61] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:26:49.419366   29946 system_pods.go:61] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:26:49.419370   29946 system_pods.go:61] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:26:49.419374   29946 system_pods.go:61] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:26:49.419377   29946 system_pods.go:61] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:26:49.419380   29946 system_pods.go:61] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:26:49.419384   29946 system_pods.go:61] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:26:49.419389   29946 system_pods.go:61] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:26:49.419392   29946 system_pods.go:61] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:26:49.419395   29946 system_pods.go:61] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:26:49.419398   29946 system_pods.go:61] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:26:49.419402   29946 system_pods.go:61] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:26:49.419408   29946 system_pods.go:61] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:26:49.419411   29946 system_pods.go:61] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:26:49.419415   29946 system_pods.go:61] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:26:49.419421   29946 system_pods.go:61] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:26:49.419423   29946 system_pods.go:61] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:26:49.419429   29946 system_pods.go:74] duration metric: took 185.281302ms to wait for pod list to return data ...
	I0919 19:26:49.419438   29946 default_sa.go:34] waiting for default service account to be created ...
	I0919 19:26:49.608712   29946 request.go:632] Waited for 189.201717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:26:49.608795   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:26:49.608802   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.608809   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.608814   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.612612   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:26:49.612816   29946 default_sa.go:45] found service account: "default"
	I0919 19:26:49.612834   29946 default_sa.go:55] duration metric: took 193.38871ms for default service account to be created ...
	I0919 19:26:49.612845   29946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 19:26:49.808242   29946 request.go:632] Waited for 195.299973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.808306   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:26:49.808313   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:49.808327   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:49.808332   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:49.812812   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:49.816942   29946 system_pods.go:86] 17 kube-system pods found
	I0919 19:26:49.816968   29946 system_pods.go:89] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:26:49.816974   29946 system_pods.go:89] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:26:49.816978   29946 system_pods.go:89] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:26:49.816982   29946 system_pods.go:89] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:26:49.816987   29946 system_pods.go:89] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:26:49.816990   29946 system_pods.go:89] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:26:49.816994   29946 system_pods.go:89] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:26:49.816997   29946 system_pods.go:89] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:26:49.817001   29946 system_pods.go:89] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:26:49.817006   29946 system_pods.go:89] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:26:49.817009   29946 system_pods.go:89] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:26:49.817012   29946 system_pods.go:89] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:26:49.817015   29946 system_pods.go:89] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:26:49.817018   29946 system_pods.go:89] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:26:49.817022   29946 system_pods.go:89] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:26:49.817025   29946 system_pods.go:89] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:26:49.817027   29946 system_pods.go:89] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:26:49.817033   29946 system_pods.go:126] duration metric: took 204.182134ms to wait for k8s-apps to be running ...
	I0919 19:26:49.817042   29946 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 19:26:49.817110   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:26:49.832907   29946 system_svc.go:56] duration metric: took 15.854427ms WaitForService to wait for kubelet
	I0919 19:26:49.832937   29946 kubeadm.go:582] duration metric: took 22.146916375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:26:49.832959   29946 node_conditions.go:102] verifying NodePressure condition ...
	I0919 19:26:50.008290   29946 request.go:632] Waited for 175.255303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes
	I0919 19:26:50.008370   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes
	I0919 19:26:50.008377   29946 round_trippers.go:469] Request Headers:
	I0919 19:26:50.008395   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:26:50.008412   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:26:50.012639   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:26:50.013536   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:26:50.013563   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:26:50.013575   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:26:50.013578   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:26:50.013583   29946 node_conditions.go:105] duration metric: took 180.618254ms to run NodePressure ...
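[Editor's note] The NodePressure step above lists all nodes and reports each one's ephemeral-storage and CPU capacity (17734596Ki and 2 for both nodes here). A minimal client-go sketch of reading those capacity fields; the kubeconfig path is illustrative.

    // Listing node capacity the way the NodePressure check reports it.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s ephemeral-storage=%s cpu=%s\n",
                n.Name,
                n.Status.Capacity.StorageEphemeral().String(), // 17734596Ki in the log above
                n.Status.Capacity.Cpu().String())              // 2
        }
    }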
	I0919 19:26:50.013609   29946 start.go:241] waiting for startup goroutines ...
	I0919 19:26:50.013645   29946 start.go:255] writing updated cluster config ...
	I0919 19:26:50.016260   29946 out.go:201] 
	I0919 19:26:50.017506   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:26:50.017610   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:26:50.019348   29946 out.go:177] * Starting "ha-076992-m03" control-plane node in "ha-076992" cluster
	I0919 19:26:50.020726   29946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:26:50.020750   29946 cache.go:56] Caching tarball of preloaded images
	I0919 19:26:50.020859   29946 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:26:50.020870   29946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:26:50.020951   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:26:50.021276   29946 start.go:360] acquireMachinesLock for ha-076992-m03: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:26:50.021320   29946 start.go:364] duration metric: took 25.515µs to acquireMachinesLock for "ha-076992-m03"
	I0919 19:26:50.021340   29946 start.go:93] Provisioning new machine with config: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:26:50.021447   29946 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0919 19:26:50.023219   29946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 19:26:50.023316   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:26:50.023350   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:26:50.038933   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0919 19:26:50.039419   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:26:50.039936   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:26:50.039958   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:26:50.040292   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:26:50.040458   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:26:50.040592   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:26:50.040729   29946 start.go:159] libmachine.API.Create for "ha-076992" (driver="kvm2")
	I0919 19:26:50.040757   29946 client.go:168] LocalClient.Create starting
	I0919 19:26:50.040790   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem
	I0919 19:26:50.040824   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:26:50.040838   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:26:50.040886   29946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem
	I0919 19:26:50.040904   29946 main.go:141] libmachine: Decoding PEM data...
	I0919 19:26:50.040914   29946 main.go:141] libmachine: Parsing certificate...
	I0919 19:26:50.040933   29946 main.go:141] libmachine: Running pre-create checks...
	I0919 19:26:50.040941   29946 main.go:141] libmachine: (ha-076992-m03) Calling .PreCreateCheck
	I0919 19:26:50.041191   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetConfigRaw
	I0919 19:26:50.041557   29946 main.go:141] libmachine: Creating machine...
	I0919 19:26:50.041570   29946 main.go:141] libmachine: (ha-076992-m03) Calling .Create
	I0919 19:26:50.041718   29946 main.go:141] libmachine: (ha-076992-m03) Creating KVM machine...
	I0919 19:26:50.042959   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found existing default KVM network
	I0919 19:26:50.043089   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found existing private KVM network mk-ha-076992
	I0919 19:26:50.043212   29946 main.go:141] libmachine: (ha-076992-m03) Setting up store path in /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03 ...
	I0919 19:26:50.043237   29946 main.go:141] libmachine: (ha-076992-m03) Building disk image from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 19:26:50.043301   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.043202   30696 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:26:50.043388   29946 main.go:141] libmachine: (ha-076992-m03) Downloading /home/jenkins/minikube-integration/19664-7917/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0919 19:26:50.272805   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.272669   30696 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa...
	I0919 19:26:50.366932   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.366796   30696 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/ha-076992-m03.rawdisk...
	I0919 19:26:50.366967   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Writing magic tar header
	I0919 19:26:50.366980   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Writing SSH key tar header
	I0919 19:26:50.366998   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:50.366905   30696 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03 ...
	I0919 19:26:50.367013   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03
	I0919 19:26:50.367090   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03 (perms=drwx------)
	I0919 19:26:50.367125   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube/machines (perms=drwxr-xr-x)
	I0919 19:26:50.367136   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube/machines
	I0919 19:26:50.367162   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917/.minikube (perms=drwxr-xr-x)
	I0919 19:26:50.367182   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:26:50.367196   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration/19664-7917 (perms=drwxrwxr-x)
	I0919 19:26:50.367208   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19664-7917
	I0919 19:26:50.367220   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 19:26:50.367228   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home/jenkins
	I0919 19:26:50.367240   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Checking permissions on dir: /home
	I0919 19:26:50.367249   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Skipping /home - not owner
	I0919 19:26:50.367259   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 19:26:50.367272   29946 main.go:141] libmachine: (ha-076992-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 19:26:50.367282   29946 main.go:141] libmachine: (ha-076992-m03) Creating domain...
	I0919 19:26:50.368245   29946 main.go:141] libmachine: (ha-076992-m03) define libvirt domain using xml: 
	I0919 19:26:50.368263   29946 main.go:141] libmachine: (ha-076992-m03) <domain type='kvm'>
	I0919 19:26:50.368270   29946 main.go:141] libmachine: (ha-076992-m03)   <name>ha-076992-m03</name>
	I0919 19:26:50.368275   29946 main.go:141] libmachine: (ha-076992-m03)   <memory unit='MiB'>2200</memory>
	I0919 19:26:50.368280   29946 main.go:141] libmachine: (ha-076992-m03)   <vcpu>2</vcpu>
	I0919 19:26:50.368287   29946 main.go:141] libmachine: (ha-076992-m03)   <features>
	I0919 19:26:50.368314   29946 main.go:141] libmachine: (ha-076992-m03)     <acpi/>
	I0919 19:26:50.368335   29946 main.go:141] libmachine: (ha-076992-m03)     <apic/>
	I0919 19:26:50.368360   29946 main.go:141] libmachine: (ha-076992-m03)     <pae/>
	I0919 19:26:50.368384   29946 main.go:141] libmachine: (ha-076992-m03)     
	I0919 19:26:50.368405   29946 main.go:141] libmachine: (ha-076992-m03)   </features>
	I0919 19:26:50.368416   29946 main.go:141] libmachine: (ha-076992-m03)   <cpu mode='host-passthrough'>
	I0919 19:26:50.368427   29946 main.go:141] libmachine: (ha-076992-m03)   
	I0919 19:26:50.368434   29946 main.go:141] libmachine: (ha-076992-m03)   </cpu>
	I0919 19:26:50.368446   29946 main.go:141] libmachine: (ha-076992-m03)   <os>
	I0919 19:26:50.368453   29946 main.go:141] libmachine: (ha-076992-m03)     <type>hvm</type>
	I0919 19:26:50.368468   29946 main.go:141] libmachine: (ha-076992-m03)     <boot dev='cdrom'/>
	I0919 19:26:50.368486   29946 main.go:141] libmachine: (ha-076992-m03)     <boot dev='hd'/>
	I0919 19:26:50.368498   29946 main.go:141] libmachine: (ha-076992-m03)     <bootmenu enable='no'/>
	I0919 19:26:50.368507   29946 main.go:141] libmachine: (ha-076992-m03)   </os>
	I0919 19:26:50.368515   29946 main.go:141] libmachine: (ha-076992-m03)   <devices>
	I0919 19:26:50.368519   29946 main.go:141] libmachine: (ha-076992-m03)     <disk type='file' device='cdrom'>
	I0919 19:26:50.368529   29946 main.go:141] libmachine: (ha-076992-m03)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/boot2docker.iso'/>
	I0919 19:26:50.368538   29946 main.go:141] libmachine: (ha-076992-m03)       <target dev='hdc' bus='scsi'/>
	I0919 19:26:50.368548   29946 main.go:141] libmachine: (ha-076992-m03)       <readonly/>
	I0919 19:26:50.368562   29946 main.go:141] libmachine: (ha-076992-m03)     </disk>
	I0919 19:26:50.368574   29946 main.go:141] libmachine: (ha-076992-m03)     <disk type='file' device='disk'>
	I0919 19:26:50.368585   29946 main.go:141] libmachine: (ha-076992-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 19:26:50.368595   29946 main.go:141] libmachine: (ha-076992-m03)       <source file='/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/ha-076992-m03.rawdisk'/>
	I0919 19:26:50.368602   29946 main.go:141] libmachine: (ha-076992-m03)       <target dev='hda' bus='virtio'/>
	I0919 19:26:50.368606   29946 main.go:141] libmachine: (ha-076992-m03)     </disk>
	I0919 19:26:50.368613   29946 main.go:141] libmachine: (ha-076992-m03)     <interface type='network'>
	I0919 19:26:50.368618   29946 main.go:141] libmachine: (ha-076992-m03)       <source network='mk-ha-076992'/>
	I0919 19:26:50.368625   29946 main.go:141] libmachine: (ha-076992-m03)       <model type='virtio'/>
	I0919 19:26:50.368637   29946 main.go:141] libmachine: (ha-076992-m03)     </interface>
	I0919 19:26:50.368648   29946 main.go:141] libmachine: (ha-076992-m03)     <interface type='network'>
	I0919 19:26:50.368657   29946 main.go:141] libmachine: (ha-076992-m03)       <source network='default'/>
	I0919 19:26:50.368666   29946 main.go:141] libmachine: (ha-076992-m03)       <model type='virtio'/>
	I0919 19:26:50.368678   29946 main.go:141] libmachine: (ha-076992-m03)     </interface>
	I0919 19:26:50.368688   29946 main.go:141] libmachine: (ha-076992-m03)     <serial type='pty'>
	I0919 19:26:50.368694   29946 main.go:141] libmachine: (ha-076992-m03)       <target port='0'/>
	I0919 19:26:50.368700   29946 main.go:141] libmachine: (ha-076992-m03)     </serial>
	I0919 19:26:50.368705   29946 main.go:141] libmachine: (ha-076992-m03)     <console type='pty'>
	I0919 19:26:50.368713   29946 main.go:141] libmachine: (ha-076992-m03)       <target type='serial' port='0'/>
	I0919 19:26:50.368722   29946 main.go:141] libmachine: (ha-076992-m03)     </console>
	I0919 19:26:50.368736   29946 main.go:141] libmachine: (ha-076992-m03)     <rng model='virtio'>
	I0919 19:26:50.368755   29946 main.go:141] libmachine: (ha-076992-m03)       <backend model='random'>/dev/random</backend>
	I0919 19:26:50.368772   29946 main.go:141] libmachine: (ha-076992-m03)     </rng>
	I0919 19:26:50.368781   29946 main.go:141] libmachine: (ha-076992-m03)     
	I0919 19:26:50.368790   29946 main.go:141] libmachine: (ha-076992-m03)     
	I0919 19:26:50.368799   29946 main.go:141] libmachine: (ha-076992-m03)   </devices>
	I0919 19:26:50.368809   29946 main.go:141] libmachine: (ha-076992-m03) </domain>
	I0919 19:26:50.368819   29946 main.go:141] libmachine: (ha-076992-m03) 
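[Editor's note] The XML logged above is the libvirt domain definition the kvm2 driver generates for the new m03 node: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a cdrom, the raw disk image, and two virtio NICs (the mk-ha-076992 private network plus the default network). Defining and booting such a domain with the Go libvirt bindings looks roughly like the sketch below; this assumes the libvirt.org/go/libvirt package and a domainXML string like the one shown, and it is not the kvm2 driver's own code.

    // Defining and starting a libvirt domain from XML; a sketch, not the kvm2 driver itself.
    package main

    import (
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func createDomain(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML) // persist the definition
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // boot the VM; the driver then waits for its DHCP lease
    }

    func main() {
        xml, err := os.ReadFile("domain.xml") // illustrative path holding the XML shown above
        if err != nil {
            panic(err)
        }
        if err := createDomain(string(xml)); err != nil {
            panic(err)
        }
    }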
	I0919 19:26:50.375827   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:e1:f4:70 in network default
	I0919 19:26:50.376416   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:50.376447   29946 main.go:141] libmachine: (ha-076992-m03) Ensuring networks are active...
	I0919 19:26:50.377119   29946 main.go:141] libmachine: (ha-076992-m03) Ensuring network default is active
	I0919 19:26:50.377451   29946 main.go:141] libmachine: (ha-076992-m03) Ensuring network mk-ha-076992 is active
	I0919 19:26:50.377904   29946 main.go:141] libmachine: (ha-076992-m03) Getting domain xml...
	I0919 19:26:50.378666   29946 main.go:141] libmachine: (ha-076992-m03) Creating domain...
	I0919 19:26:51.611728   29946 main.go:141] libmachine: (ha-076992-m03) Waiting to get IP...
	I0919 19:26:51.612561   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:51.612946   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:51.612965   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:51.612926   30696 retry.go:31] will retry after 229.04121ms: waiting for machine to come up
	I0919 19:26:51.843282   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:51.843786   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:51.843820   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:51.843734   30696 retry.go:31] will retry after 364.805682ms: waiting for machine to come up
	I0919 19:26:52.210136   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:52.210584   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:52.210610   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:52.210546   30696 retry.go:31] will retry after 345.198613ms: waiting for machine to come up
	I0919 19:26:52.556935   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:52.557405   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:52.557428   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:52.557338   30696 retry.go:31] will retry after 457.195059ms: waiting for machine to come up
	I0919 19:26:53.015946   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:53.016403   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:53.016423   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:53.016360   30696 retry.go:31] will retry after 743.82706ms: waiting for machine to come up
	I0919 19:26:53.762468   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:53.762847   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:53.762870   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:53.762817   30696 retry.go:31] will retry after 795.902123ms: waiting for machine to come up
	I0919 19:26:54.560380   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:54.560862   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:54.560884   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:54.560818   30696 retry.go:31] will retry after 723.847816ms: waiting for machine to come up
	I0919 19:26:55.285997   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:55.286544   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:55.286569   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:55.286475   30696 retry.go:31] will retry after 1.372100892s: waiting for machine to come up
	I0919 19:26:56.660980   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:56.661391   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:56.661417   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:56.661373   30696 retry.go:31] will retry after 1.303463786s: waiting for machine to come up
	I0919 19:26:57.966063   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:57.966500   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:57.966528   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:57.966449   30696 retry.go:31] will retry after 1.418881121s: waiting for machine to come up
	I0919 19:26:59.387181   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:26:59.387696   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:26:59.387727   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:26:59.387636   30696 retry.go:31] will retry after 2.01324992s: waiting for machine to come up
	I0919 19:27:01.402316   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:01.402776   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:27:01.402804   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:27:01.402729   30696 retry.go:31] will retry after 3.126162565s: waiting for machine to come up
	I0919 19:27:04.533132   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:04.533523   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:27:04.533546   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:27:04.533483   30696 retry.go:31] will retry after 3.645979241s: waiting for machine to come up
	I0919 19:27:08.184963   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:08.185442   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find current IP address of domain ha-076992-m03 in network mk-ha-076992
	I0919 19:27:08.185465   29946 main.go:141] libmachine: (ha-076992-m03) DBG | I0919 19:27:08.185392   30696 retry.go:31] will retry after 4.695577454s: waiting for machine to come up
	I0919 19:27:12.882164   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.882571   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has current primary IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.882589   29946 main.go:141] libmachine: (ha-076992-m03) Found IP for machine: 192.168.39.66
	I0919 19:27:12.882601   29946 main.go:141] libmachine: (ha-076992-m03) Reserving static IP address...
	I0919 19:27:12.882993   29946 main.go:141] libmachine: (ha-076992-m03) DBG | unable to find host DHCP lease matching {name: "ha-076992-m03", mac: "52:54:00:6a:be:a6", ip: "192.168.39.66"} in network mk-ha-076992
	I0919 19:27:12.954002   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Getting to WaitForSSH function...
	I0919 19:27:12.954035   29946 main.go:141] libmachine: (ha-076992-m03) Reserved static IP address: 192.168.39.66
	I0919 19:27:12.954075   29946 main.go:141] libmachine: (ha-076992-m03) Waiting for SSH to be available...
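The lines above show libmachine polling for the new VM's DHCP lease with growing, jittered delays (retry.go) until the IP appears. A minimal sketch of that wait-with-backoff pattern, assuming a hypothetical lookupLeaseIP probe rather than minikube's actual libvirt query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoLease stands in for "unable to find current IP address of domain".
var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP is a hypothetical probe; the real code asks libvirt for the
// domain's DHCP lease by MAC address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease
}

// waitForIP retries lookupLeaseIP with a growing, jittered delay until the
// lease appears or the deadline passes, mirroring the retry.go log lines.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupLeaseIP(mac)
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // roughly the growth seen in the log
	}
	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
}

func main() {
	if ip, err := waitForIP("52:54:00:6a:be:a6", 2*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}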
	I0919 19:27:12.956412   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.956840   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:12.956865   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:12.957025   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Using SSH client type: external
	I0919 19:27:12.957056   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa (-rw-------)
	I0919 19:27:12.957197   29946 main.go:141] libmachine: (ha-076992-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 19:27:12.957216   29946 main.go:141] libmachine: (ha-076992-m03) DBG | About to run SSH command:
	I0919 19:27:12.957228   29946 main.go:141] libmachine: (ha-076992-m03) DBG | exit 0
	I0919 19:27:13.081333   29946 main.go:141] libmachine: (ha-076992-m03) DBG | SSH cmd err, output: <nil>: 
	I0919 19:27:13.081616   29946 main.go:141] libmachine: (ha-076992-m03) KVM machine creation complete!
	I0919 19:27:13.081958   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetConfigRaw
	I0919 19:27:13.082498   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:13.082685   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:13.082851   29946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 19:27:13.082866   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetState
	I0919 19:27:13.084230   29946 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 19:27:13.084246   29946 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 19:27:13.084253   29946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 19:27:13.084261   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.086332   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.086661   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.086683   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.086775   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.086955   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.087082   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.087204   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.087369   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.087586   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.087601   29946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 19:27:13.188711   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
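The "exit 0" round-trip above is how libmachine confirms SSH is usable before provisioning starts. A rough equivalent using golang.org/x/crypto/ssh; the address, user and key path are taken from the log, everything else is an assumption for illustration:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH runs a trivial command over SSH, analogous to the "exit 0" check.
func probeSSH(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	err := probeSSH("192.168.39.66:22", "docker",
		"/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa")
	fmt.Println("ssh probe result:", err)
}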
	I0919 19:27:13.188735   29946 main.go:141] libmachine: Detecting the provisioner...
	I0919 19:27:13.188748   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.191413   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.191717   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.191744   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.191916   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.192073   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.192197   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.192317   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.192502   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.192705   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.192716   29946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 19:27:13.293829   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0919 19:27:13.293892   29946 main.go:141] libmachine: found compatible host: buildroot
	I0919 19:27:13.293901   29946 main.go:141] libmachine: Provisioning with buildroot...
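Detecting the provisioner above amounts to reading /etc/os-release over SSH and matching its ID/NAME fields ("Buildroot" here). A small sketch of that KEY=VALUE parsing, assuming the file contents have already been fetched:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release style KEY=VALUE lines into a map,
// trimming optional quotes, e.g. NAME=Buildroot or PRETTY_NAME="Buildroot 2023.02.9".
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	osr := parseOSRelease("NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n")
	if osr["ID"] == "buildroot" {
		fmt.Println("found compatible host:", osr["ID"])
	}
}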
	I0919 19:27:13.293911   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:27:13.294179   29946 buildroot.go:166] provisioning hostname "ha-076992-m03"
	I0919 19:27:13.294206   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:27:13.294379   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.297332   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.297705   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.297731   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.297878   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.298121   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.298268   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.298407   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.298593   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.298797   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.298812   29946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992-m03 && echo "ha-076992-m03" | sudo tee /etc/hostname
	I0919 19:27:13.417925   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992-m03
	
	I0919 19:27:13.417953   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.421043   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.421515   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.421544   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.421759   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.421977   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.422158   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.422267   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.422417   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.422625   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.422650   29946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:27:13.534273   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
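The two SSH commands above set the transient hostname and pin a 127.0.1.1 entry in /etc/hosts. The shell text is assembled on the client side roughly as below; the helper name is made up, the shell snippets are copied from the log:

package main

import "fmt"

// hostnameCommands reproduces the two shell snippets from the log: one sets
// the hostname, the other ensures a 127.0.1.1 entry exists in /etc/hosts.
func hostnameCommands(hostname string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", hostname, hostname)
	fixHosts = fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	return setHostname, fixHosts
}

func main() {
	a, b := hostnameCommands("ha-076992-m03")
	fmt.Println(a)
	fmt.Println(b)
}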
	I0919 19:27:13.534305   29946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:27:13.534319   29946 buildroot.go:174] setting up certificates
	I0919 19:27:13.534328   29946 provision.go:84] configureAuth start
	I0919 19:27:13.534336   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetMachineName
	I0919 19:27:13.534593   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:13.536896   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.537258   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.537285   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.537378   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.539354   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.539732   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.539755   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.539949   29946 provision.go:143] copyHostCerts
	I0919 19:27:13.539973   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:27:13.540002   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:27:13.540010   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:27:13.540074   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:27:13.540169   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:27:13.540188   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:27:13.540192   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:27:13.540218   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:27:13.540272   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:27:13.540289   29946 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:27:13.540295   29946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:27:13.540317   29946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:27:13.540366   29946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992-m03 san=[127.0.0.1 192.168.39.66 ha-076992-m03 localhost minikube]
	I0919 19:27:13.664258   29946 provision.go:177] copyRemoteCerts
	I0919 19:27:13.664317   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:27:13.664340   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.666694   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.666972   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.667004   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.667138   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.667349   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.667524   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.667655   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:13.747501   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:27:13.747575   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:27:13.775047   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:27:13.775117   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 19:27:13.799961   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:27:13.800042   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:27:13.824466   29946 provision.go:87] duration metric: took 290.126442ms to configureAuth
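configureAuth above generates a per-machine server certificate whose SANs cover the node's IP, hostname, localhost and 127.0.0.1, then copies it to /etc/docker on the guest. A compact approximation with crypto/x509; the real code signs with the cluster CA in certs/ca.pem, this sketch self-signs to stay short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-076992-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.66")},
		DNSNames:    []string{"ha-076992-m03", "localhost", "minikube"},
	}
	// Self-signed for brevity; minikube signs this with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}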
	I0919 19:27:13.824491   29946 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:27:13.824710   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:27:13.824790   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:13.827490   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.827892   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:13.827922   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:13.828063   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:13.828244   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.828410   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:13.828560   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:13.828704   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:13.828855   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:13.828868   29946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:27:14.055519   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:27:14.055549   29946 main.go:141] libmachine: Checking connection to Docker...
	I0919 19:27:14.055560   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetURL
	I0919 19:27:14.056949   29946 main.go:141] libmachine: (ha-076992-m03) DBG | Using libvirt version 6000000
	I0919 19:27:14.059445   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.059710   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.059746   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.059910   29946 main.go:141] libmachine: Docker is up and running!
	I0919 19:27:14.059934   29946 main.go:141] libmachine: Reticulating splines...
	I0919 19:27:14.059941   29946 client.go:171] duration metric: took 24.019173404s to LocalClient.Create
	I0919 19:27:14.059965   29946 start.go:167] duration metric: took 24.019236466s to libmachine.API.Create "ha-076992"
	I0919 19:27:14.059975   29946 start.go:293] postStartSetup for "ha-076992-m03" (driver="kvm2")
	I0919 19:27:14.059989   29946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:27:14.060019   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.060324   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:27:14.060351   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:14.062476   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.062770   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.062797   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.062880   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.063087   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.063268   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.063425   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:14.148901   29946 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:27:14.153351   29946 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:27:14.153376   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:27:14.153447   29946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:27:14.153516   29946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:27:14.153525   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:27:14.153603   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:27:14.163847   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:27:14.190891   29946 start.go:296] duration metric: took 130.895498ms for postStartSetup
	I0919 19:27:14.190969   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetConfigRaw
	I0919 19:27:14.191591   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:14.194303   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.194676   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.194706   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.195041   29946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:27:14.195249   29946 start.go:128] duration metric: took 24.173788829s to createHost
	I0919 19:27:14.195296   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:14.197299   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.197596   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.197621   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.197722   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.197880   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.197999   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.198111   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.198242   29946 main.go:141] libmachine: Using SSH client type: native
	I0919 19:27:14.198397   29946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0919 19:27:14.198407   29946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:27:14.302149   29946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726774034.280175121
	
	I0919 19:27:14.302173   29946 fix.go:216] guest clock: 1726774034.280175121
	I0919 19:27:14.302181   29946 fix.go:229] Guest: 2024-09-19 19:27:14.280175121 +0000 UTC Remote: 2024-09-19 19:27:14.195262057 +0000 UTC m=+143.681298720 (delta=84.913064ms)
	I0919 19:27:14.302206   29946 fix.go:200] guest clock delta is within tolerance: 84.913064ms
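The clock check above runs `date +%s.%N` on the guest and compares the result with the host's wall clock, accepting a small delta (about 85ms here). A sketch of that comparison, assuming the guest output has already been captured as a string:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output (e.g. "1726774034.280175121") and
// returns how far the guest clock is from local time.
func clockDelta(guestOut string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	d := time.Since(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	d, err := clockDelta("1726774034.280175121")
	if err != nil {
		panic(err)
	}
	// The log treats small deltas as within tolerance.
	fmt.Printf("guest clock delta: %s, within tolerance: %v\n", d, d < 2*time.Second)
}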
	I0919 19:27:14.302210   29946 start.go:83] releasing machines lock for "ha-076992-m03", held for 24.280882386s
	I0919 19:27:14.302236   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.302488   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:14.305506   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.305858   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.305888   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.308327   29946 out.go:177] * Found network options:
	I0919 19:27:14.309814   29946 out.go:177]   - NO_PROXY=192.168.39.173,192.168.39.232
	W0919 19:27:14.311323   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 19:27:14.311345   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:27:14.311387   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.311977   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.312171   29946 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:27:14.312284   29946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:27:14.312326   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	W0919 19:27:14.312356   29946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 19:27:14.312379   29946 proxy.go:119] fail to check proxy env: Error ip not in block
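The "fail to check proxy env: Error ip not in block" warnings come from testing whether the node IP is already covered by a NO_PROXY entry. A sketch of that membership test over a comma-separated NO_PROXY list, under the assumption that literal IPs and CIDR blocks are the only entry forms that matter here:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipInNoProxy reports whether ip matches any entry in a NO_PROXY-style list,
// where entries may be literal IPs or CIDR blocks.
func ipInNoProxy(ip string, noProxy string) bool {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false
	}
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if entry == ip {
			return true
		}
		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(parsed) {
			return true
		}
	}
	return false
}

func main() {
	noProxy := "192.168.39.173,192.168.39.232"
	// 192.168.39.66 is neither listed nor inside a CIDR entry, so it is "not in block".
	fmt.Println(ipInNoProxy("192.168.39.66", noProxy))
}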
	I0919 19:27:14.312445   29946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:27:14.312467   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:27:14.315326   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315477   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315739   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.315765   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315795   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:14.315810   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:14.315916   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.316063   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:27:14.316081   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.316266   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.316269   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:27:14.316443   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:14.316458   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:27:14.316594   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:27:14.552647   29946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:27:14.559427   29946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:27:14.559487   29946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:27:14.575890   29946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 19:27:14.575920   29946 start.go:495] detecting cgroup driver to use...
	I0919 19:27:14.575983   29946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:27:14.591936   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:27:14.606858   29946 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:27:14.606921   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:27:14.621450   29946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:27:14.635364   29946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:27:14.756131   29946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:27:14.907154   29946 docker.go:233] disabling docker service ...
	I0919 19:27:14.907243   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:27:14.923366   29946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:27:14.936588   29946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:27:15.078676   29946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:27:15.198104   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:27:15.212919   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:27:15.232314   29946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:27:15.232376   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.242884   29946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:27:15.242957   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.253165   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.263320   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.273801   29946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:27:15.284463   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.296688   29946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.314869   29946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:27:15.327156   29946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:27:15.338349   29946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 19:27:15.338412   29946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 19:27:15.353775   29946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:27:15.365059   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:27:15.499190   29946 ssh_runner.go:195] Run: sudo systemctl restart crio
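The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pin the pause image, switch the cgroup manager to cgroupfs, and adjust the unprivileged-port sysctl. The two main substitutions expressed in Go regexp form over an in-memory copy of the file (a sketch, not the code minikube runs):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the two main substitutions from the log:
//   pause_image    -> registry.k8s.io/pause:3.10
//   cgroup_manager -> cgroupfs
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}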
	I0919 19:27:15.590064   29946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:27:15.590148   29946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:27:15.595200   29946 start.go:563] Will wait 60s for crictl version
	I0919 19:27:15.595269   29946 ssh_runner.go:195] Run: which crictl
	I0919 19:27:15.599029   29946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:27:15.640263   29946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:27:15.640356   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:27:15.670621   29946 ssh_runner.go:195] Run: crio --version
	I0919 19:27:15.702613   29946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:27:15.703947   29946 out.go:177]   - env NO_PROXY=192.168.39.173
	I0919 19:27:15.705240   29946 out.go:177]   - env NO_PROXY=192.168.39.173,192.168.39.232
	I0919 19:27:15.706651   29946 main.go:141] libmachine: (ha-076992-m03) Calling .GetIP
	I0919 19:27:15.709234   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:15.709551   29946 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:27:15.709578   29946 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:27:15.709744   29946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:27:15.714032   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:27:15.727732   29946 mustload.go:65] Loading cluster: ha-076992
	I0919 19:27:15.727996   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:27:15.728332   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:27:15.728377   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:27:15.743011   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0919 19:27:15.743384   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:27:15.743811   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:27:15.743832   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:27:15.744550   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:27:15.744751   29946 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:27:15.746453   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:27:15.746740   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:27:15.746776   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:27:15.761958   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0919 19:27:15.762454   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:27:15.762899   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:27:15.762916   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:27:15.763265   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:27:15.763475   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:27:15.763629   29946 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.66
	I0919 19:27:15.763640   29946 certs.go:194] generating shared ca certs ...
	I0919 19:27:15.763657   29946 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:27:15.763802   29946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:27:15.763861   29946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:27:15.763874   29946 certs.go:256] generating profile certs ...
	I0919 19:27:15.763968   29946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:27:15.764001   29946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430
	I0919 19:27:15.764017   29946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.232 192.168.39.66 192.168.39.254]
	I0919 19:27:15.897451   29946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430 ...
	I0919 19:27:15.897480   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430: {Name:mk8beb13cebda88770e8cb2f4d651fd5a45e954c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:27:15.897644   29946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430 ...
	I0919 19:27:15.897655   29946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430: {Name:mkcd8cc84233dc653483e6e6401ec1c9f04025cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:27:15.897721   29946 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.9a419430 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:27:15.897848   29946 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.9a419430 -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:27:15.897973   29946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:27:15.897988   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:27:15.898003   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:27:15.898016   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:27:15.898028   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:27:15.898040   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:27:15.898054   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:27:15.898066   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:27:15.913133   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:27:15.913210   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:27:15.913259   29946 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:27:15.913269   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:27:15.913290   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:27:15.913314   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:27:15.913334   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:27:15.913371   29946 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:27:15.913402   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:27:15.913413   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:15.913423   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:27:15.913453   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:27:15.916526   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:15.916928   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:27:15.916951   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:15.917154   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:27:15.917364   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:27:15.917522   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:27:15.917642   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:27:15.989416   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 19:27:15.994763   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 19:27:16.006209   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 19:27:16.010673   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 19:27:16.021439   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 19:27:16.026004   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 19:27:16.036773   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 19:27:16.041211   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 19:27:16.051440   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 19:27:16.055788   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 19:27:16.066035   29946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 19:27:16.071009   29946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 19:27:16.081291   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:27:16.106933   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:27:16.131578   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:27:16.154733   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:27:16.178142   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 19:27:16.203131   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 19:27:16.231577   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:27:16.258783   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:27:16.282643   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:27:16.307319   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:27:16.330802   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:27:16.354835   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 19:27:16.371768   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 19:27:16.387527   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 19:27:16.403635   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 19:27:16.419535   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 19:27:16.437605   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 19:27:16.453718   29946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 19:27:16.470564   29946 ssh_runner.go:195] Run: openssl version
	I0919 19:27:16.476297   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:27:16.486813   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:27:16.491276   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:27:16.491323   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:27:16.496992   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:27:16.507732   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:27:16.518539   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:16.523068   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:16.523123   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:27:16.528612   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 19:27:16.539667   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:27:16.550474   29946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:27:16.555341   29946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:27:16.555413   29946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:27:16.561228   29946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
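Editor's note: the commands above show how minikube publishes its CA material on the guest: each PEM lands in /usr/share/ca-certificates, openssl x509 -hash -noout computes its subject hash, and a symlink named <hash>.0 is created under /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem) so the system trust lookup finds it. A minimal local sketch of that hash-and-link convention, assuming root access and the minikubeCA.pem path from the log purely as a stand-in:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // stand-in path from the log
	// openssl prints the subject hash that the /etc/ssl/certs lookup expects.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of ln -fs: drop any stale link, then point <hash>.0 at the PEM.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}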
	I0919 19:27:16.572802   29946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:27:16.577025   29946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 19:27:16.577096   29946 kubeadm.go:934] updating node {m03 192.168.39.66 8443 v1.31.1 crio true true} ...
	I0919 19:27:16.577177   29946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:27:16.577201   29946 kube-vip.go:115] generating kube-vip config ...
	I0919 19:27:16.577231   29946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:27:16.595588   29946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:27:16.595653   29946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
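Editor's note: kube-vip.go:115 and :137 above show the static pod manifest minikube renders for the control-plane virtual IP (192.168.39.254 on port 8443, with cp_enable and lb_enable turned on, advertised on eth0). A sketch of the same render step with text/template follows; the template here is a hypothetical trimmed-down version carrying only the per-cluster values, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// Hypothetical, trimmed-down template: only the fields that vary per cluster.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    env:
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the log above; the struct itself is illustrative only.
	data := struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}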
	I0919 19:27:16.595722   29946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:27:16.605668   29946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0919 19:27:16.605728   29946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0919 19:27:16.615281   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0919 19:27:16.615305   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:27:16.615306   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0919 19:27:16.615328   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:27:16.615349   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0919 19:27:16.615354   29946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0919 19:27:16.615388   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0919 19:27:16.615397   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:27:16.623586   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0919 19:27:16.623626   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0919 19:27:16.623772   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0919 19:27:16.623799   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0919 19:27:16.636164   29946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:27:16.636292   29946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0919 19:27:16.736519   29946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0919 19:27:16.736558   29946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
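Editor's note: the binaries.go and vm_assets.go lines above follow a simple pattern per binary: stat the target under /var/lib/minikube/binaries/v1.31.1, and only when that stat fails transfer the file from the local cache. A local-filesystem sketch of that check-then-copy step (the SSH transport is omitted; the kubelet paths are reused from the log purely for illustration):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureBinary copies src to dst only when dst does not already exist,
// mirroring the "existence check ... exited with status 1 -> scp" flow above.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to transfer
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	cache := os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.1/kubelet")
	if err := ensureBinary(cache, "/var/lib/minikube/binaries/v1.31.1/kubelet"); err != nil {
		fmt.Println("transfer failed:", err)
	}
}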
	I0919 19:27:17.474932   29946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 19:27:17.484832   29946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 19:27:17.501777   29946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:27:17.518686   29946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:27:17.535414   29946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:27:17.539429   29946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
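Editor's note: the two commands above keep /etc/hosts consistent on the new node: grep checks whether control-plane.minikube.internal already resolves to the virtual IP, and the bash pipeline rewrites the file with any stale entry stripped and the fresh 192.168.39.254 line appended. A pure-Go sketch of the same rewrite, a simplified stand-in for that shell pipeline (it needs root to write /etc/hosts):

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.39.254\t" + host
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any existing line for the control-plane alias, then append the current VIP.
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	outLines := append(kept, entry)
	// Normalize to a single trailing newline before writing the file back.
	out := strings.TrimRight(strings.Join(outLines, "\n"), "\n") + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
		panic(err)
	}
}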
	I0919 19:27:17.552345   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:27:17.687800   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:27:17.706912   29946 host.go:66] Checking if "ha-076992" exists ...
	I0919 19:27:17.707271   29946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:27:17.707332   29946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:27:17.723234   29946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46531
	I0919 19:27:17.723773   29946 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:27:17.724317   29946 main.go:141] libmachine: Using API Version  1
	I0919 19:27:17.724344   29946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:27:17.724711   29946 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:27:17.724916   29946 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:27:17.725046   29946 start.go:317] joinCluster: &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:27:17.725198   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 19:27:17.725213   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:27:17.728260   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:17.728743   29946 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:27:17.728764   29946 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:27:17.728933   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:27:17.729087   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:27:17.729233   29946 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:27:17.729362   29946 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:27:17.893938   29946 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:27:17.893987   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nfzvhu.osmpbokubpd9m5ji --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m03 --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443"
	I0919 19:27:40.045829   29946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nfzvhu.osmpbokubpd9m5ji --discovery-token-ca-cert-hash sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076992-m03 --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443": (22.151818373s)
	I0919 19:27:40.045864   29946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 19:27:40.606802   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076992-m03 minikube.k8s.io/updated_at=2024_09_19T19_27_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=ha-076992 minikube.k8s.io/primary=false
	I0919 19:27:40.720562   29946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076992-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 19:27:40.852305   29946 start.go:319] duration metric: took 23.127257351s to joinCluster
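Editor's note: start.go:317 and :343 above are the heart of adding the third control-plane node: minikube asks the existing control plane for a join command (kubeadm token create --print-join-command --ttl=0), then runs that command on m03 with the extra control-plane flags visible in the log, and finally labels and un-taints the new node. A side-effect-free sketch that assembles the same invocation from its parts; the token and CA hash are the throwaway values from this run, shown only to illustrate the argument shape:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// In a real run these come from "kubeadm token create --print-join-command --ttl=0"
	// executed on the primary control-plane node.
	token := "nfzvhu.osmpbokubpd9m5ji"
	caHash := "sha256:7c0c74a319a48e20691242952e4affb8a8ad4800d94ea9a05ba81906251d90e5"
	args := []string{
		"kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=ha-076992-m03",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.66",
		"--apiserver-bind-port=8443",
	}
	// Printing instead of executing keeps the sketch inert.
	fmt.Println(strings.Join(args, " "))
}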
	I0919 19:27:40.852371   29946 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:27:40.852725   29946 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:27:40.853772   29946 out.go:177] * Verifying Kubernetes components...
	I0919 19:27:40.855055   29946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:27:41.140593   29946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:27:41.167178   29946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:27:41.167526   29946 kapi.go:59] client config for ha-076992: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 19:27:41.167609   29946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.173:8443
	I0919 19:27:41.167883   29946 node_ready.go:35] waiting up to 6m0s for node "ha-076992-m03" to be "Ready" ...
	I0919 19:27:41.167964   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:41.167975   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:41.167986   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:41.167992   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:41.171312   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:41.668093   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:41.668122   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:41.668136   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:41.668145   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:41.671847   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:42.169049   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:42.169078   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:42.169089   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:42.169097   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:42.173253   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:42.668124   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:42.668154   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:42.668165   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:42.668172   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:42.671705   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:43.169071   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:43.169099   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:43.169111   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:43.169119   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:43.172988   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:43.173723   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:43.668069   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:43.668090   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:43.668098   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:43.668102   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:43.671379   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:44.168189   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:44.168213   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:44.168224   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:44.168232   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:44.172163   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:44.668238   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:44.668263   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:44.668292   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:44.668300   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:44.672297   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:45.168809   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:45.168914   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:45.168943   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:45.168952   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:45.172818   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:45.668795   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:45.668819   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:45.668829   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:45.668833   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:45.672833   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:45.673726   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:46.168145   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:46.168176   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:46.168188   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:46.168195   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:46.171541   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:46.669018   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:46.669043   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:46.669053   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:46.669058   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:46.672077   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:47.168070   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:47.168095   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:47.168106   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:47.168112   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:47.171091   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:47.668131   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:47.668156   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:47.668167   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:47.668173   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:47.671585   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:48.168035   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:48.168054   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:48.168066   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:48.168071   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:48.172365   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:48.172854   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:48.668232   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:48.668261   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:48.668269   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:48.668273   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:48.671672   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:49.168763   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:49.168784   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:49.168792   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:49.168796   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:49.172225   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:49.668291   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:49.668312   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:49.668319   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:49.668323   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:49.671622   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:50.168990   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:50.169014   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:50.169023   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:50.169028   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:50.172111   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:50.668480   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:50.668500   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:50.668508   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:50.668514   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:50.672693   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:50.673442   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:51.168845   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:51.168870   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:51.168883   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:51.168896   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:51.172225   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:51.668471   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:51.668494   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:51.668505   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:51.668510   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:51.672549   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:27:52.168467   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:52.168490   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:52.168499   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:52.168502   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:52.172284   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:52.668300   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:52.668325   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:52.668337   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:52.668345   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:52.671626   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:53.168043   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:53.168066   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:53.168076   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:53.168082   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:53.171507   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:53.172186   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:53.668508   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:53.668530   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:53.668539   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:53.668544   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:53.674065   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:54.169042   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:54.169081   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:54.169093   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:54.169101   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:54.172484   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:54.668693   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:54.668716   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:54.668724   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:54.668728   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:54.671712   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:55.168811   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:55.168838   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:55.168850   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:55.168856   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:55.171986   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:55.172564   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:55.669027   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:55.669049   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:55.669060   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:55.669116   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:55.674537   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:56.168644   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:56.168667   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:56.168674   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:56.168677   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:56.172061   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:56.669121   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:56.669152   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:56.669164   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:56.669170   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:56.672708   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:57.168818   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:57.168844   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:57.168856   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:57.168865   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:57.172258   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:57.172846   29946 node_ready.go:53] node "ha-076992-m03" has status "Ready":"False"
	I0919 19:27:57.668135   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:57.668158   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:57.668169   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:57.668174   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:57.671424   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:58.168923   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:58.168945   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:58.168953   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:58.168956   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:58.172623   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:58.668685   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:58.668705   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:58.668713   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:58.668717   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:58.671912   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.168858   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:59.168880   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.168889   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.168892   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.171841   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.172469   29946 node_ready.go:49] node "ha-076992-m03" has status "Ready":"True"
	I0919 19:27:59.172488   29946 node_ready.go:38] duration metric: took 18.004586894s for node "ha-076992-m03" to be "Ready" ...
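Editor's note: node_ready.go above polls GET /api/v1/nodes/ha-076992-m03 roughly twice a second until the node's Ready condition flips to True, about 18 seconds in this run. A client-go sketch of the same wait, assuming the kubeconfig path from the log and the 6 minute budget the log states; poll cadence matches the ~500ms spacing of the requests above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19664-7917/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, the same cadence and budget as the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-076992-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-076992-m03 is Ready")
}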
	I0919 19:27:59.172499   29946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 19:27:59.172582   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:27:59.172595   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.172604   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.172609   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.178464   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:59.185406   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.185497   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bst8x
	I0919 19:27:59.185507   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.185518   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.185526   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.188442   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.189103   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.189120   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.189130   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.189136   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.191329   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.191851   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.191866   29946 pod_ready.go:82] duration metric: took 6.439364ms for pod "coredns-7c65d6cfc9-bst8x" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.191873   29946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.191928   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nbds4
	I0919 19:27:59.191937   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.191944   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.191948   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.194394   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.195009   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.195025   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.195031   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.195035   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.197517   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.198256   29946 pod_ready.go:93] pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.198270   29946 pod_ready.go:82] duration metric: took 6.390833ms for pod "coredns-7c65d6cfc9-nbds4" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.198278   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.198317   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992
	I0919 19:27:59.198324   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.198331   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.198336   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.200499   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.201171   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.201184   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.201190   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.201201   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.203402   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.203953   29946 pod_ready.go:93] pod "etcd-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.203973   29946 pod_ready.go:82] duration metric: took 5.68948ms for pod "etcd-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.203984   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.204042   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m02
	I0919 19:27:59.204053   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.204062   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.204073   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.206409   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.207206   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:27:59.207225   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.207234   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.207242   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.209682   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:27:59.210215   29946 pod_ready.go:93] pod "etcd-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.210231   29946 pod_ready.go:82] duration metric: took 6.235966ms for pod "etcd-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.210241   29946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.369687   29946 request.go:632] Waited for 159.345593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m03
	I0919 19:27:59.369758   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076992-m03
	I0919 19:27:59.369768   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.369776   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.369782   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.373326   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.569343   29946 request.go:632] Waited for 195.374141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:59.569427   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:27:59.569435   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.569444   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.569454   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.572773   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.573760   29946 pod_ready.go:93] pod "etcd-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.573784   29946 pod_ready.go:82] duration metric: took 363.534844ms for pod "etcd-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
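Editor's note: the request.go:632 "Waited for ... due to client-side throttling" lines are client-go's own flow control, not the API server: the rest.Config dumped by kapi.go earlier leaves QPS and Burst at 0, so client-go falls back to its defaults (5 requests per second, burst 10), and the per-pod readiness checks queue behind one another. A sketch of raising those limits on a rest.Config, should a caller want the readiness sweep to run faster; the 50/100 values are arbitrary examples, not minikube's settings:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19664-7917/kubeconfig")
	if err != nil {
		panic(err)
	}
	// QPS 0 / Burst 0 (as in the log's rest.Config dump) means client-go applies its
	// defaults of 5 and 10; bumping them widens the client-side rate limit.
	cfg.QPS = 50    // example value
	cfg.Burst = 100 // example value
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}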
	I0919 19:27:59.573804   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.769848   29946 request.go:632] Waited for 195.964398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:27:59.769916   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992
	I0919 19:27:59.769924   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.769941   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.769951   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.773613   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:27:59.969692   29946 request.go:632] Waited for 195.271169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.969763   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:27:59.969771   29946 round_trippers.go:469] Request Headers:
	I0919 19:27:59.969782   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:27:59.969790   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:27:59.975454   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:27:59.976399   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:27:59.976419   29946 pod_ready.go:82] duration metric: took 402.608428ms for pod "kube-apiserver-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:27:59.976442   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.169862   29946 request.go:632] Waited for 193.313777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:28:00.169932   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m02
	I0919 19:28:00.169948   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.169963   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.169971   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.173456   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.369679   29946 request.go:632] Waited for 195.364808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:00.369746   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:00.369757   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.369769   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.369777   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.373078   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.373725   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:00.373745   29946 pod_ready.go:82] duration metric: took 397.293364ms for pod "kube-apiserver-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.373754   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.569238   29946 request.go:632] Waited for 195.416262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m03
	I0919 19:28:00.569304   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076992-m03
	I0919 19:28:00.569310   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.569317   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.569325   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.572712   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.769839   29946 request.go:632] Waited for 196.213847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:00.769902   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:00.769909   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.769916   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.769925   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.773054   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:00.773595   29946 pod_ready.go:93] pod "kube-apiserver-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:00.773611   29946 pod_ready.go:82] duration metric: took 399.848276ms for pod "kube-apiserver-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.773623   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:00.969813   29946 request.go:632] Waited for 196.102797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:28:00.969866   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992
	I0919 19:28:00.969871   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:00.969878   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:00.969883   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:00.978905   29946 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0919 19:28:01.169966   29946 request.go:632] Waited for 190.375143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:01.170066   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:01.170080   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.170090   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.170095   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.173733   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.174395   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:01.174419   29946 pod_ready.go:82] duration metric: took 400.786244ms for pod "kube-controller-manager-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.174431   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.369465   29946 request.go:632] Waited for 194.942354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:28:01.369536   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m02
	I0919 19:28:01.369546   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.369559   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.369570   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.373178   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.569830   29946 request.go:632] Waited for 195.884004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:01.569887   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:01.569894   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.569906   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.569911   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.573021   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.573575   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:01.573597   29946 pod_ready.go:82] duration metric: took 399.158503ms for pod "kube-controller-manager-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.573610   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.769720   29946 request.go:632] Waited for 196.039819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m03
	I0919 19:28:01.769796   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076992-m03
	I0919 19:28:01.769804   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.769815   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.769863   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.773496   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.969679   29946 request.go:632] Waited for 195.366002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:01.969751   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:01.969759   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:01.969770   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:01.969778   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:01.973411   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:01.973966   29946 pod_ready.go:93] pod "kube-controller-manager-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:01.973986   29946 pod_ready.go:82] duration metric: took 400.368344ms for pod "kube-controller-manager-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:01.973999   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.169159   29946 request.go:632] Waited for 195.067817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:28:02.169233   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4d8dc
	I0919 19:28:02.169240   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.169249   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.169255   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.172645   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:02.369743   29946 request.go:632] Waited for 196.39611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:02.369834   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:02.369848   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.369859   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.369869   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.372902   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:02.373658   29946 pod_ready.go:93] pod "kube-proxy-4d8dc" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:02.373679   29946 pod_ready.go:82] duration metric: took 399.671379ms for pod "kube-proxy-4d8dc" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.373695   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qxzr" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.569759   29946 request.go:632] Waited for 195.99907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qxzr
	I0919 19:28:02.569828   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qxzr
	I0919 19:28:02.569835   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.569845   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.569850   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.573245   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:02.769286   29946 request.go:632] Waited for 195.311639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:02.769401   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:02.769411   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.769421   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.769429   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.774902   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:28:02.775546   29946 pod_ready.go:93] pod "kube-proxy-4qxzr" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:02.775569   29946 pod_ready.go:82] duration metric: took 401.866343ms for pod "kube-proxy-4qxzr" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.775582   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:02.969688   29946 request.go:632] Waited for 194.028715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:28:02.969782   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjtfj
	I0919 19:28:02.969793   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:02.969804   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:02.969814   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:02.973511   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.169667   29946 request.go:632] Waited for 195.362144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.169732   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.169740   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.169750   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.169759   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.173206   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.173751   29946 pod_ready.go:93] pod "kube-proxy-tjtfj" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:03.173769   29946 pod_ready.go:82] duration metric: took 398.180461ms for pod "kube-proxy-tjtfj" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.173777   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.369899   29946 request.go:632] Waited for 196.051119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:28:03.370000   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992
	I0919 19:28:03.370008   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.370019   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.370028   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.373045   29946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:28:03.569018   29946 request.go:632] Waited for 195.269584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:03.569098   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992
	I0919 19:28:03.569104   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.569111   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.569117   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.572980   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.573818   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:03.573842   29946 pod_ready.go:82] duration metric: took 400.056994ms for pod "kube-scheduler-ha-076992" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.573856   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.768884   29946 request.go:632] Waited for 194.957925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:28:03.768975   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m02
	I0919 19:28:03.768982   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.768989   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.768994   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.772280   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.969113   29946 request.go:632] Waited for 196.276201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.969173   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m02
	I0919 19:28:03.969181   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:03.969192   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:03.969201   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:03.972689   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:03.973513   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:03.973536   29946 pod_ready.go:82] duration metric: took 399.670878ms for pod "kube-scheduler-ha-076992-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:03.973550   29946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:04.169664   29946 request.go:632] Waited for 196.044338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m03
	I0919 19:28:04.169768   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076992-m03
	I0919 19:28:04.169779   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.169790   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.169795   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.173604   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:04.369491   29946 request.go:632] Waited for 195.428121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:04.369586   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes/ha-076992-m03
	I0919 19:28:04.369594   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.369605   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.369611   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.373358   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:04.373807   29946 pod_ready.go:93] pod "kube-scheduler-ha-076992-m03" in "kube-system" namespace has status "Ready":"True"
	I0919 19:28:04.373827   29946 pod_ready.go:82] duration metric: took 400.269116ms for pod "kube-scheduler-ha-076992-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:28:04.373841   29946 pod_ready.go:39] duration metric: took 5.201326396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 19:28:04.373868   29946 api_server.go:52] waiting for apiserver process to appear ...
	I0919 19:28:04.373935   29946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:28:04.390528   29946 api_server.go:72] duration metric: took 23.538119441s to wait for apiserver process to appear ...
	I0919 19:28:04.390551   29946 api_server.go:88] waiting for apiserver healthz status ...
	I0919 19:28:04.390571   29946 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0919 19:28:04.396791   29946 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0919 19:28:04.396862   29946 round_trippers.go:463] GET https://192.168.39.173:8443/version
	I0919 19:28:04.396873   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.396882   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.396889   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.397946   29946 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 19:28:04.398142   29946 api_server.go:141] control plane version: v1.31.1
	I0919 19:28:04.398162   29946 api_server.go:131] duration metric: took 7.603365ms to wait for apiserver health ...
	I0919 19:28:04.398171   29946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 19:28:04.569591   29946 request.go:632] Waited for 171.340636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.569649   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.569654   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.569661   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.569665   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.575663   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:28:04.582592   29946 system_pods.go:59] 24 kube-system pods found
	I0919 19:28:04.582629   29946 system_pods.go:61] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:28:04.582636   29946 system_pods.go:61] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:28:04.582641   29946 system_pods.go:61] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:28:04.582646   29946 system_pods.go:61] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:28:04.582651   29946 system_pods.go:61] "etcd-ha-076992-m03" [2cb8094f-2857-49e8-a740-58c09de52bb5] Running
	I0919 19:28:04.582656   29946 system_pods.go:61] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:28:04.582660   29946 system_pods.go:61] "kindnet-89gmh" [696397d5-76c4-4565-9baa-042392bc74c8] Running
	I0919 19:28:04.582665   29946 system_pods.go:61] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:28:04.582670   29946 system_pods.go:61] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:28:04.582674   29946 system_pods.go:61] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:28:04.582679   29946 system_pods.go:61] "kube-apiserver-ha-076992-m03" [7ada8b62-958d-4bbf-9b60-4f2f8738e864] Running
	I0919 19:28:04.582685   29946 system_pods.go:61] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:28:04.582696   29946 system_pods.go:61] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:28:04.582705   29946 system_pods.go:61] "kube-controller-manager-ha-076992-m03" [b12ed136-a047-45cc-966f-fdbb624ee027] Running
	I0919 19:28:04.582710   29946 system_pods.go:61] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:28:04.582715   29946 system_pods.go:61] "kube-proxy-4qxzr" [91b8da75-fb68-4cfe-b463-5f4ce57a9fbc] Running
	I0919 19:28:04.582719   29946 system_pods.go:61] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:28:04.582722   29946 system_pods.go:61] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:28:04.582725   29946 system_pods.go:61] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:28:04.582729   29946 system_pods.go:61] "kube-scheduler-ha-076992-m03" [7b69ed21-49ee-47d0-add2-83b93f61b3cf] Running
	I0919 19:28:04.582732   29946 system_pods.go:61] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:28:04.582735   29946 system_pods.go:61] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:28:04.582738   29946 system_pods.go:61] "kube-vip-ha-076992-m03" [8e4ad9ad-38d3-4189-8ea9-16a7e8f87f08] Running
	I0919 19:28:04.582741   29946 system_pods.go:61] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:28:04.582746   29946 system_pods.go:74] duration metric: took 184.569532ms to wait for pod list to return data ...
	I0919 19:28:04.582762   29946 default_sa.go:34] waiting for default service account to be created ...
	I0919 19:28:04.769178   29946 request.go:632] Waited for 186.318811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:28:04.769251   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:28:04.769259   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.769269   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.769302   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.773568   29946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:28:04.773707   29946 default_sa.go:45] found service account: "default"
	I0919 19:28:04.773726   29946 default_sa.go:55] duration metric: took 190.956992ms for default service account to be created ...
	I0919 19:28:04.773736   29946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 19:28:04.968965   29946 request.go:632] Waited for 195.155154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.969039   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/namespaces/kube-system/pods
	I0919 19:28:04.969056   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:04.969099   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:04.969108   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:04.974937   29946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:28:04.983584   29946 system_pods.go:86] 24 kube-system pods found
	I0919 19:28:04.983617   29946 system_pods.go:89] "coredns-7c65d6cfc9-bst8x" [165f4eae-fc28-4b50-b35f-f61f95d9872a] Running
	I0919 19:28:04.983625   29946 system_pods.go:89] "coredns-7c65d6cfc9-nbds4" [89ceb0f8-a15c-405e-b0ed-d54a8bfe332f] Running
	I0919 19:28:04.983629   29946 system_pods.go:89] "etcd-ha-076992" [a36c9719-58c8-4483-a916-29a9d0dd5613] Running
	I0919 19:28:04.983633   29946 system_pods.go:89] "etcd-ha-076992-m02" [07b412db-5357-435d-aa00-cd43f5a73f63] Running
	I0919 19:28:04.983637   29946 system_pods.go:89] "etcd-ha-076992-m03" [2cb8094f-2857-49e8-a740-58c09de52bb5] Running
	I0919 19:28:04.983641   29946 system_pods.go:89] "kindnet-6d8pz" [b38eb07f-478f-4299-995c-501a18aa5fe1] Running
	I0919 19:28:04.983645   29946 system_pods.go:89] "kindnet-89gmh" [696397d5-76c4-4565-9baa-042392bc74c8] Running
	I0919 19:28:04.983648   29946 system_pods.go:89] "kindnet-j846w" [cdccd08d-8a5d-4495-8ad3-5591de87862f] Running
	I0919 19:28:04.983652   29946 system_pods.go:89] "kube-apiserver-ha-076992" [1fa836fb-0fd7-4c80-acfa-fb0cf24c252a] Running
	I0919 19:28:04.983656   29946 system_pods.go:89] "kube-apiserver-ha-076992-m02" [af4ed3e9-f6a3-455c-a72e-c28233f93113] Running
	I0919 19:28:04.983659   29946 system_pods.go:89] "kube-apiserver-ha-076992-m03" [7ada8b62-958d-4bbf-9b60-4f2f8738e864] Running
	I0919 19:28:04.983663   29946 system_pods.go:89] "kube-controller-manager-ha-076992" [dd13afbd-7e6f-49fa-bab4-20998b968f98] Running
	I0919 19:28:04.983667   29946 system_pods.go:89] "kube-controller-manager-ha-076992-m02" [01a73ea5-ba7b-4a8a-bbb2-fc8dd0cd06ad] Running
	I0919 19:28:04.983670   29946 system_pods.go:89] "kube-controller-manager-ha-076992-m03" [b12ed136-a047-45cc-966f-fdbb624ee027] Running
	I0919 19:28:04.983674   29946 system_pods.go:89] "kube-proxy-4d8dc" [4d522b18-9ae7-46a9-a6c7-e1560a1822de] Running
	I0919 19:28:04.983677   29946 system_pods.go:89] "kube-proxy-4qxzr" [91b8da75-fb68-4cfe-b463-5f4ce57a9fbc] Running
	I0919 19:28:04.983680   29946 system_pods.go:89] "kube-proxy-tjtfj" [e46462e0-0c51-4ae5-924a-c0cf6029f102] Running
	I0919 19:28:04.983683   29946 system_pods.go:89] "kube-scheduler-ha-076992" [1533c118-c7d1-4a87-98d6-899acaa868d6] Running
	I0919 19:28:04.983687   29946 system_pods.go:89] "kube-scheduler-ha-076992-m02" [878ec001-2974-4ef4-8a15-c87f69f285aa] Running
	I0919 19:28:04.983691   29946 system_pods.go:89] "kube-scheduler-ha-076992-m03" [7b69ed21-49ee-47d0-add2-83b93f61b3cf] Running
	I0919 19:28:04.983694   29946 system_pods.go:89] "kube-vip-ha-076992" [28d46155-5352-4ab1-9480-9e5e3a5cbb28] Running
	I0919 19:28:04.983697   29946 system_pods.go:89] "kube-vip-ha-076992-m02" [ea560e15-8e24-4c5e-8525-88c4f021cbff] Running
	I0919 19:28:04.983708   29946 system_pods.go:89] "kube-vip-ha-076992-m03" [8e4ad9ad-38d3-4189-8ea9-16a7e8f87f08] Running
	I0919 19:28:04.983714   29946 system_pods.go:89] "storage-provisioner" [7964879c-5097-490e-b1ba-dd41091ca283] Running
	I0919 19:28:04.983719   29946 system_pods.go:126] duration metric: took 209.976345ms to wait for k8s-apps to be running ...
	I0919 19:28:04.983728   29946 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 19:28:04.983768   29946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:28:05.000249   29946 system_svc.go:56] duration metric: took 16.508734ms WaitForService to wait for kubelet
	I0919 19:28:05.000280   29946 kubeadm.go:582] duration metric: took 24.147874151s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:28:05.000306   29946 node_conditions.go:102] verifying NodePressure condition ...
	I0919 19:28:05.168981   29946 request.go:632] Waited for 168.596869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.173:8443/api/v1/nodes
	I0919 19:28:05.169036   29946 round_trippers.go:463] GET https://192.168.39.173:8443/api/v1/nodes
	I0919 19:28:05.169043   29946 round_trippers.go:469] Request Headers:
	I0919 19:28:05.169052   29946 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:28:05.169059   29946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 19:28:05.172968   29946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:28:05.174140   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:28:05.174163   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:28:05.174173   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:28:05.174177   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:28:05.174180   29946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 19:28:05.174183   29946 node_conditions.go:123] node cpu capacity is 2
	I0919 19:28:05.174187   29946 node_conditions.go:105] duration metric: took 173.877315ms to run NodePressure ...
	I0919 19:28:05.174197   29946 start.go:241] waiting for startup goroutines ...
	I0919 19:28:05.174217   29946 start.go:255] writing updated cluster config ...
	I0919 19:28:05.174491   29946 ssh_runner.go:195] Run: rm -f paused
	I0919 19:28:05.224162   29946 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 19:28:05.226313   29946 out.go:177] * Done! kubectl is now configured to use "ha-076992" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.391600211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774316391570208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=911a9ba1-9ce8-48c6-9af4-d22cf3f1fc1d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.393058845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b91394e-8dab-4f65-8f0e-b979eb11c502 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.393125859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b91394e-8dab-4f65-8f0e-b979eb11c502 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.393402373Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b91394e-8dab-4f65-8f0e-b979eb11c502 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.431832958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd714654-f6c7-45ff-a304-cd162b2877f9 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.431912486Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd714654-f6c7-45ff-a304-cd162b2877f9 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.433903810Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83c23327-e054-4542-bf86-e3523a4e7305 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.434391608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774316434366861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83c23327-e054-4542-bf86-e3523a4e7305 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.434968238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0de7478a-5aea-48f3-89d5-6b580479db60 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.435090692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0de7478a-5aea-48f3-89d5-6b580479db60 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.435357674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0de7478a-5aea-48f3-89d5-6b580479db60 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.476739461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0fe9602-697c-4e44-bbe8-2083d401e947 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.476831087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0fe9602-697c-4e44-bbe8-2083d401e947 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.478328787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7daa4e74-eb37-4023-883d-30e9bc0b1226 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.478773844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774316478750866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7daa4e74-eb37-4023-883d-30e9bc0b1226 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.479457667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f97a1f7d-ceae-4bea-934b-f478d52ee339 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.479516216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f97a1f7d-ceae-4bea-934b-f478d52ee339 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.479756300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f97a1f7d-ceae-4bea-934b-f478d52ee339 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.518949593Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aca4aca7-6289-49bb-b28d-7b40d1dc0428 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.519089523Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aca4aca7-6289-49bb-b28d-7b40d1dc0428 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.520297313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5f77032-f8bd-4b64-bf0e-59256fbd4cb1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.520825968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774316520799140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5f77032-f8bd-4b64-bf0e-59256fbd4cb1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.521580803Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df893e4b-4d80-49a9-8847-a5e9760c66a1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.521664561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df893e4b-4d80-49a9-8847-a5e9760c66a1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:31:56 ha-076992 crio[661]: time="2024-09-19 19:31:56.521934848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774089735237911,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950241242996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726773950179487713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856,PodSandboxId:5d96139db90a869185766b4a95cc660c067d57ed861dcf3c89bfeb58312e7665,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726773950134252886,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17267739
37821721913,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726773937599648822,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389,PodSandboxId:9f7ef19609750c2f270d503ca524fb10d3e6bdd92d2cdd62c9d0a41ea35f79ea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726773928437470403,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d13805d19ec913a3d0f90382069839b,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726773925364535119,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501,PodSandboxId:9cebb02c5eed594580aac2b2bebff36495a751b306f64293a7810adb08895f9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726773925319552747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726773925242815006,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b,PodSandboxId:6a8db8524df215a659d8b7a716d41518cfa9769a492e4cfdb8c016f18e7845b6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726773925210548493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df893e4b-4d80-49a9-8847-a5e9760c66a1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52db63dad4c31       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a8aaf854df641       busybox-7dff88458-8wfb7
	17ef846dadbee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   8583d1eda759f       coredns-7c65d6cfc9-nbds4
	cbaa19f6b3857       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   d65bb54e4c426       coredns-7c65d6cfc9-bst8x
	6eb7d57489862       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   5d96139db90a8       storage-provisioner
	d623b5f012d8a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   0273544afdfa6       kindnet-j846w
	9d62ecb2cc70a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   2a6c6ac66a434       kube-proxy-4d8dc
	3132b4bb29e16       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   9f7ef19609750       kube-vip-ha-076992
	5745c8d186325       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   09b02f34308ad       kube-scheduler-ha-076992
	f7da5064b19f5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   9cebb02c5eed5       kube-apiserver-ha-076992
	3beffc038ef33       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   fc5737a4c0f5c       etcd-ha-076992
	5b605d500b3ee       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   6a8db8524df21       kube-controller-manager-ha-076992
	
	
	==> coredns [17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3] <==
	[INFO] 10.244.0.4:34108 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.006817779s
	[INFO] 10.244.0.4:40322 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013826742s
	[INFO] 10.244.1.2:55399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000298188s
	[INFO] 10.244.1.2:35261 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000170423s
	[INFO] 10.244.2.2:57349 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000113863s
	[INFO] 10.244.2.2:35304 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000093782s
	[INFO] 10.244.0.4:60710 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175542s
	[INFO] 10.244.0.4:56638 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002407779s
	[INFO] 10.244.1.2:60721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148724s
	[INFO] 10.244.2.2:40070 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138971s
	[INFO] 10.244.2.2:53394 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186542s
	[INFO] 10.244.2.2:54178 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000225634s
	[INFO] 10.244.2.2:53480 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001438271s
	[INFO] 10.244.2.2:48475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168626s
	[INFO] 10.244.2.2:49380 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160453s
	[INFO] 10.244.2.2:38326 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100289s
	[INFO] 10.244.1.2:47564 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107018s
	[INFO] 10.244.0.4:55521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119496s
	[INFO] 10.244.0.4:51830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118694s
	[INFO] 10.244.0.4:49301 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000181413s
	[INFO] 10.244.1.2:38961 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124955s
	[INFO] 10.244.1.2:37060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092863s
	[INFO] 10.244.1.2:44024 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085892s
	[INFO] 10.244.2.2:35688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014156s
	[INFO] 10.244.2.2:33974 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170311s
	
	
	==> coredns [cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0] <==
	[INFO] 10.244.0.4:45775 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000206662s
	[INFO] 10.244.0.4:34019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123934s
	[INFO] 10.244.1.2:60797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218519s
	[INFO] 10.244.1.2:44944 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001794304s
	[INFO] 10.244.1.2:51111 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185225s
	[INFO] 10.244.1.2:46956 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160685s
	[INFO] 10.244.1.2:36318 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001321241s
	[INFO] 10.244.1.2:53158 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118134s
	[INFO] 10.244.1.2:45995 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102925s
	[INFO] 10.244.2.2:55599 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001757807s
	[INFO] 10.244.0.4:50520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118756s
	[INFO] 10.244.0.4:48294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189838s
	[INFO] 10.244.0.4:52710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005729s
	[INFO] 10.244.0.4:56525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085763s
	[INFO] 10.244.1.2:43917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168832s
	[INFO] 10.244.1.2:34972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200932s
	[INFO] 10.244.1.2:50680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181389s
	[INFO] 10.244.2.2:51430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152587s
	[INFO] 10.244.2.2:37924 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000317695s
	[INFO] 10.244.2.2:46377 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000371446s
	[INFO] 10.244.2.2:36790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012815s
	[INFO] 10.244.0.4:35196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000409388s
	[INFO] 10.244.1.2:43265 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235404s
	[INFO] 10.244.2.2:56515 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113892s
	[INFO] 10.244.2.2:33574 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251263s
	
	
	==> describe nodes <==
	Name:               ha-076992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T19_25_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:31:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:28:35 +0000   Thu, 19 Sep 2024 19:25:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    ha-076992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 88962b0779f84ff6915974a39d1a24ba
	  System UUID:                88962b07-79f8-4ff6-9159-74a39d1a24ba
	  Boot ID:                    f4736dd6-fd6e-4dc3-b2ee-64f8773325ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8wfb7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 coredns-7c65d6cfc9-bst8x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m20s
	  kube-system                 coredns-7c65d6cfc9-nbds4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m20s
	  kube-system                 etcd-ha-076992                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-j846w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-076992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-controller-manager-ha-076992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-4d8dc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-076992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-076992                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m18s  kube-proxy       
	  Normal  Starting                 6m25s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m25s  kubelet          Node ha-076992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s  kubelet          Node ha-076992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s  kubelet          Node ha-076992 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m22s  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal  NodeReady                6m7s   kubelet          Node ha-076992 status is now: NodeReady
	  Normal  RegisteredNode           5m24s  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal  RegisteredNode           4m11s  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	
	
	Name:               ha-076992-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_26_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:26:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:29:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 19 Sep 2024 19:28:27 +0000   Thu, 19 Sep 2024 19:30:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-076992-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fbb92a6f6fa49d49b42ed70b015086d
	  System UUID:                7fbb92a6-f6fa-49d4-9b42-ed70b015086d
	  Boot ID:                    d99d8bb8-fed0-4ef9-95a0-7b5cb6b4a8e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c64rv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-076992-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m30s
	  kube-system                 kindnet-6d8pz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m32s
	  kube-system                 kube-apiserver-ha-076992-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-076992-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-tjtfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-scheduler-ha-076992-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-vip-ha-076992-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m32s (x8 over 5m32s)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m32s (x8 over 5m32s)  kubelet          Node ha-076992-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m32s (x7 over 5m32s)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  NodeNotReady             116s                   node-controller  Node ha-076992-m02 status is now: NodeNotReady
	
	
	Name:               ha-076992-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_27_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:27:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:31:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:28:38 +0000   Thu, 19 Sep 2024 19:27:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.66
	  Hostname:    ha-076992-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0db72b5d16d8492b8f2f42e6cedd7691
	  System UUID:                0db72b5d-16d8-492b-8f2f-42e6cedd7691
	  Boot ID:                    a11e77a1-44c6-47d3-9894-1e2db25df61f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jl6lr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-076992-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kindnet-89gmh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m19s
	  kube-system                 kube-apiserver-ha-076992-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-076992-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-proxy-4qxzr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-ha-076992-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-076992-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node ha-076992-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node ha-076992-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node ha-076992-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	
	
	Name:               ha-076992-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_28_43_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:28:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:31:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:28:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:28:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:28:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:29:13 +0000   Thu, 19 Sep 2024 19:29:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-076992-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37704cd295b34d23a0864637f4482597
	  System UUID:                37704cd2-95b3-4d23-a086-4637f4482597
	  Boot ID:                    7afcea43-e30f-4573-9142-69832448eb86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8jqvd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-8gt7w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m14s)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m14s)  kubelet          Node ha-076992-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m14s)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-076992-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep19 19:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050539] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040218] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779433] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep19 19:25] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.560626] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.418534] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.061113] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050106] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.181483] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.133235] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.281192] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.948588] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.762419] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.059014] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.974334] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.083682] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344336] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.503085] kauditd_printk_skb: 38 callbacks suppressed
	[Sep19 19:26] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018] <==
	{"level":"warn","ts":"2024-09-19T19:31:56.581140Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.783816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.790762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.791673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.796209Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.807574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.814778Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.824227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.828271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.831478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.858399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.914548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.921109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.927650Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.930877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.935392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.942964Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.950387Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.958552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.958723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.961956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.965222Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.969419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.974957Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:31:56.980262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:31:57 up 7 min,  0 users,  load average: 0.18, 0.20, 0.10
	Linux ha-076992 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4] <==
	I0919 19:31:19.301654       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:31:29.299423       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:31:29.299534       1 main.go:299] handling current node
	I0919 19:31:29.299588       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:31:29.299608       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:31:29.299733       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:31:29.299753       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:31:29.299816       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:31:29.299834       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:31:39.295069       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:31:39.295797       1 main.go:299] handling current node
	I0919 19:31:39.295864       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:31:39.295880       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:31:39.296147       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:31:39.296174       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:31:39.296250       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:31:39.296272       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:31:49.295036       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:31:49.295191       1 main.go:299] handling current node
	I0919 19:31:49.295208       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:31:49.295213       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:31:49.295337       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:31:49.295366       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:31:49.295432       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:31:49.295459       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501] <==
	I0919 19:25:31.486188       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 19:25:31.506649       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 19:25:35.598891       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0919 19:25:35.750237       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0919 19:27:38.100207       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 13.658µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 19:27:38.100632       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 19:27:38.102611       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 19:27:38.103892       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 19:27:38.105160       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.382601ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0919 19:28:11.389256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45218: use of closed network connection
	E0919 19:28:11.576268       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45246: use of closed network connection
	E0919 19:28:11.773899       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45258: use of closed network connection
	E0919 19:28:11.977200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45272: use of closed network connection
	E0919 19:28:12.158836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45298: use of closed network connection
	E0919 19:28:12.343311       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45308: use of closed network connection
	E0919 19:28:12.533653       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45320: use of closed network connection
	E0919 19:28:12.708696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45336: use of closed network connection
	E0919 19:28:12.880339       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45348: use of closed network connection
	E0919 19:28:13.172557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45378: use of closed network connection
	E0919 19:28:13.360524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45402: use of closed network connection
	E0919 19:28:13.537403       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45414: use of closed network connection
	E0919 19:28:13.726245       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45428: use of closed network connection
	E0919 19:28:13.903745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45458: use of closed network connection
	E0919 19:28:14.076234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45480: use of closed network connection
	W0919 19:29:39.951311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.173 192.168.39.66]
	
	
	==> kube-controller-manager [5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b] <==
	I0919 19:28:42.651135       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-076992-m04\" does not exist"
	I0919 19:28:42.696072       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-076992-m04" podCIDRs=["10.244.3.0/24"]
	I0919 19:28:42.696237       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:42.696385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:42.984651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:43.058418       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:43.437129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:44.991734       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-076992-m04"
	I0919 19:28:44.991858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:45.053922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:45.913734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:45.955524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:28:52.981964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:03.869117       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076992-m04"
	I0919 19:29:03.870215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:03.885512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:05.009111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:29:13.638377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:30:00.034775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:30:00.035207       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076992-m04"
	I0919 19:30:00.059561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:30:00.073804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.744937ms"
	I0919 19:30:00.073933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.501µs"
	I0919 19:30:00.989765       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:30:05.283636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	
	
	==> kube-proxy [9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 19:25:37.903821       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 19:25:37.932314       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.173"]
	E0919 19:25:37.932452       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:25:37.975043       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 19:25:37.975079       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 19:25:37.975107       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:25:37.978675       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:25:37.979280       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:25:37.979417       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:25:37.981041       1 config.go:199] "Starting service config controller"
	I0919 19:25:37.981519       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:25:37.981599       1 config.go:328] "Starting node config controller"
	I0919 19:25:37.981623       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:25:37.982405       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:25:37.982433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:25:38.081647       1 shared_informer.go:320] Caches are synced for service config
	I0919 19:25:38.081721       1 shared_informer.go:320] Caches are synced for node config
	I0919 19:25:38.082821       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1] <==
	W0919 19:25:29.292699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 19:25:29.292789       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.292883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 19:25:29.292917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.315628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 19:25:29.315915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.317062       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 19:25:29.317708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.375676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 19:25:29.375771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.399790       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 19:25:29.399959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.458469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 19:25:29.458568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 19:25:29.500384       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 19:25:29.500442       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0919 19:25:32.657764       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 19:28:06.097590       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jl6lr\": pod busybox-7dff88458-jl6lr is already assigned to node \"ha-076992-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jl6lr" node="ha-076992-m03"
	E0919 19:28:06.098198       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3f7ee95d-11f9-4073-8fa9-d4aa5fc08d99(default/busybox-7dff88458-jl6lr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jl6lr"
	E0919 19:28:06.098359       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jl6lr\": pod busybox-7dff88458-jl6lr is already assigned to node \"ha-076992-m03\"" pod="default/busybox-7dff88458-jl6lr"
	I0919 19:28:06.098540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jl6lr" node="ha-076992-m03"
	E0919 19:28:06.176510       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8wfb7\": pod busybox-7dff88458-8wfb7 is already assigned to node \"ha-076992\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8wfb7" node="ha-076992"
	E0919 19:28:06.176725       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e9e5cd58-874f-41c6-8c0a-d37b5101a1f9(default/busybox-7dff88458-8wfb7) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8wfb7"
	E0919 19:28:06.181327       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8wfb7\": pod busybox-7dff88458-8wfb7 is already assigned to node \"ha-076992\"" pod="default/busybox-7dff88458-8wfb7"
	I0919 19:28:06.181857       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8wfb7" node="ha-076992"
	
	
	==> kubelet <==
	Sep 19 19:30:31 ha-076992 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 19:30:31 ha-076992 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 19:30:31 ha-076992 kubelet[1304]: E0919 19:30:31.509860    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774231509247618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:31 ha-076992 kubelet[1304]: E0919 19:30:31.509926    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774231509247618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:41 ha-076992 kubelet[1304]: E0919 19:30:41.515125    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774241513934130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:41 ha-076992 kubelet[1304]: E0919 19:30:41.515489    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774241513934130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:51 ha-076992 kubelet[1304]: E0919 19:30:51.516656    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774251516247410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:30:51 ha-076992 kubelet[1304]: E0919 19:30:51.516759    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774251516247410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:01 ha-076992 kubelet[1304]: E0919 19:31:01.520748    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774261520199169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:01 ha-076992 kubelet[1304]: E0919 19:31:01.520803    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774261520199169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:11 ha-076992 kubelet[1304]: E0919 19:31:11.523342    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774271522952876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:11 ha-076992 kubelet[1304]: E0919 19:31:11.523611    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774271522952876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:21 ha-076992 kubelet[1304]: E0919 19:31:21.527464    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774281526662586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:21 ha-076992 kubelet[1304]: E0919 19:31:21.527558    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774281526662586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:31 ha-076992 kubelet[1304]: E0919 19:31:31.406408    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 19 19:31:31 ha-076992 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 19:31:31 ha-076992 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 19:31:31 ha-076992 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 19:31:31 ha-076992 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 19:31:31 ha-076992 kubelet[1304]: E0919 19:31:31.535893    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774291534622152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:31 ha-076992 kubelet[1304]: E0919 19:31:31.535937    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774291534622152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:41 ha-076992 kubelet[1304]: E0919 19:31:41.537584    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774301537350727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:41 ha-076992 kubelet[1304]: E0919 19:31:41.537608    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774301537350727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:51 ha-076992 kubelet[1304]: E0919 19:31:51.539335    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774311539054466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:31:51 ha-076992 kubelet[1304]: E0919 19:31:51.539392    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774311539054466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076992 -n ha-076992
helpers_test.go:261: (dbg) Run:  kubectl --context ha-076992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.54s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (429.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-076992 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-076992 -v=7 --alsologtostderr
E0919 19:33:59.334670   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-076992 -v=7 --alsologtostderr: exit status 82 (2m1.901620126s)

                                                
                                                
-- stdout --
	* Stopping node "ha-076992-m04"  ...
	* Stopping node "ha-076992-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:32:02.139406   35115 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:32:02.139546   35115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:32:02.139555   35115 out.go:358] Setting ErrFile to fd 2...
	I0919 19:32:02.139560   35115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:32:02.139750   35115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:32:02.140046   35115 out.go:352] Setting JSON to false
	I0919 19:32:02.140148   35115 mustload.go:65] Loading cluster: ha-076992
	I0919 19:32:02.140561   35115 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:32:02.140640   35115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:32:02.140805   35115 mustload.go:65] Loading cluster: ha-076992
	I0919 19:32:02.140933   35115 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:32:02.140956   35115 stop.go:39] StopHost: ha-076992-m04
	I0919 19:32:02.141354   35115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:32:02.141391   35115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:32:02.156024   35115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44607
	I0919 19:32:02.156474   35115 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:32:02.156975   35115 main.go:141] libmachine: Using API Version  1
	I0919 19:32:02.157007   35115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:32:02.157416   35115 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:32:02.159928   35115 out.go:177] * Stopping node "ha-076992-m04"  ...
	I0919 19:32:02.161342   35115 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0919 19:32:02.161367   35115 main.go:141] libmachine: (ha-076992-m04) Calling .DriverName
	I0919 19:32:02.161587   35115 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0919 19:32:02.161614   35115 main.go:141] libmachine: (ha-076992-m04) Calling .GetSSHHostname
	I0919 19:32:02.164370   35115 main.go:141] libmachine: (ha-076992-m04) DBG | domain ha-076992-m04 has defined MAC address 52:54:00:e3:13:dd in network mk-ha-076992
	I0919 19:32:02.164842   35115 main.go:141] libmachine: (ha-076992-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:13:dd", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:28:29 +0000 UTC Type:0 Mac:52:54:00:e3:13:dd Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-076992-m04 Clientid:01:52:54:00:e3:13:dd}
	I0919 19:32:02.164870   35115 main.go:141] libmachine: (ha-076992-m04) DBG | domain ha-076992-m04 has defined IP address 192.168.39.157 and MAC address 52:54:00:e3:13:dd in network mk-ha-076992
	I0919 19:32:02.165026   35115 main.go:141] libmachine: (ha-076992-m04) Calling .GetSSHPort
	I0919 19:32:02.165215   35115 main.go:141] libmachine: (ha-076992-m04) Calling .GetSSHKeyPath
	I0919 19:32:02.165391   35115 main.go:141] libmachine: (ha-076992-m04) Calling .GetSSHUsername
	I0919 19:32:02.165518   35115 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m04/id_rsa Username:docker}
	I0919 19:32:02.257598   35115 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0919 19:32:02.311562   35115 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0919 19:32:02.365805   35115 main.go:141] libmachine: Stopping "ha-076992-m04"...
	I0919 19:32:02.365837   35115 main.go:141] libmachine: (ha-076992-m04) Calling .GetState
	I0919 19:32:02.367310   35115 main.go:141] libmachine: (ha-076992-m04) Calling .Stop
	I0919 19:32:02.370603   35115 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 0/120
	I0919 19:32:03.579274   35115 main.go:141] libmachine: (ha-076992-m04) Calling .GetState
	I0919 19:32:03.580710   35115 main.go:141] libmachine: Machine "ha-076992-m04" was stopped.
	I0919 19:32:03.580731   35115 stop.go:75] duration metric: took 1.419392145s to stop
	I0919 19:32:03.580774   35115 stop.go:39] StopHost: ha-076992-m03
	I0919 19:32:03.581225   35115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:32:03.581277   35115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:32:03.596057   35115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0919 19:32:03.596458   35115 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:32:03.596933   35115 main.go:141] libmachine: Using API Version  1
	I0919 19:32:03.596952   35115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:32:03.597278   35115 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:32:03.599463   35115 out.go:177] * Stopping node "ha-076992-m03"  ...
	I0919 19:32:03.600979   35115 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0919 19:32:03.600999   35115 main.go:141] libmachine: (ha-076992-m03) Calling .DriverName
	I0919 19:32:03.601231   35115 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0919 19:32:03.601251   35115 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHHostname
	I0919 19:32:03.604568   35115 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:32:03.604973   35115 main.go:141] libmachine: (ha-076992-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:be:a6", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:27:04 +0000 UTC Type:0 Mac:52:54:00:6a:be:a6 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-076992-m03 Clientid:01:52:54:00:6a:be:a6}
	I0919 19:32:03.605004   35115 main.go:141] libmachine: (ha-076992-m03) DBG | domain ha-076992-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:6a:be:a6 in network mk-ha-076992
	I0919 19:32:03.605136   35115 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHPort
	I0919 19:32:03.605303   35115 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHKeyPath
	I0919 19:32:03.605460   35115 main.go:141] libmachine: (ha-076992-m03) Calling .GetSSHUsername
	I0919 19:32:03.605580   35115 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m03/id_rsa Username:docker}
	I0919 19:32:03.694730   35115 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0919 19:32:03.751976   35115 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0919 19:32:03.807250   35115 main.go:141] libmachine: Stopping "ha-076992-m03"...
	I0919 19:32:03.807295   35115 main.go:141] libmachine: (ha-076992-m03) Calling .GetState
	I0919 19:32:03.808580   35115 main.go:141] libmachine: (ha-076992-m03) Calling .Stop
	I0919 19:32:03.811624   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 0/120
	I0919 19:32:04.813024   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 1/120
	I0919 19:32:05.814581   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 2/120
	I0919 19:32:06.816178   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 3/120
	I0919 19:32:07.817782   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 4/120
	I0919 19:32:08.819672   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 5/120
	I0919 19:32:09.821475   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 6/120
	I0919 19:32:10.823265   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 7/120
	I0919 19:32:11.825445   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 8/120
	I0919 19:32:12.826888   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 9/120
	I0919 19:32:13.828089   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 10/120
	I0919 19:32:14.829643   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 11/120
	I0919 19:32:15.831163   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 12/120
	I0919 19:32:16.832721   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 13/120
	I0919 19:32:17.834166   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 14/120
	I0919 19:32:18.835883   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 15/120
	I0919 19:32:19.837558   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 16/120
	I0919 19:32:20.839612   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 17/120
	I0919 19:32:21.841641   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 18/120
	I0919 19:32:22.842994   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 19/120
	I0919 19:32:23.844380   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 20/120
	I0919 19:32:24.845695   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 21/120
	I0919 19:32:25.847765   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 22/120
	I0919 19:32:26.849401   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 23/120
	I0919 19:32:27.851060   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 24/120
	I0919 19:32:28.853406   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 25/120
	I0919 19:32:29.854966   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 26/120
	I0919 19:32:30.856293   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 27/120
	I0919 19:32:31.857987   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 28/120
	I0919 19:32:32.859343   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 29/120
	I0919 19:32:33.861173   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 30/120
	I0919 19:32:34.862761   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 31/120
	I0919 19:32:35.864216   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 32/120
	I0919 19:32:36.865921   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 33/120
	I0919 19:32:37.867324   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 34/120
	I0919 19:32:38.868716   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 35/120
	I0919 19:32:39.870210   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 36/120
	I0919 19:32:40.871599   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 37/120
	I0919 19:32:41.873005   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 38/120
	I0919 19:32:42.874518   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 39/120
	I0919 19:32:43.876002   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 40/120
	I0919 19:32:44.877504   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 41/120
	I0919 19:32:45.879467   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 42/120
	I0919 19:32:46.880896   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 43/120
	I0919 19:32:47.882148   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 44/120
	I0919 19:32:48.883908   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 45/120
	I0919 19:32:49.885134   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 46/120
	I0919 19:32:50.886606   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 47/120
	I0919 19:32:51.887763   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 48/120
	I0919 19:32:52.889201   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 49/120
	I0919 19:32:53.890922   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 50/120
	I0919 19:32:54.892322   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 51/120
	I0919 19:32:55.893574   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 52/120
	I0919 19:32:56.894720   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 53/120
	I0919 19:32:57.895984   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 54/120
	I0919 19:32:58.897746   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 55/120
	I0919 19:32:59.899113   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 56/120
	I0919 19:33:00.900509   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 57/120
	I0919 19:33:01.901756   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 58/120
	I0919 19:33:02.903283   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 59/120
	I0919 19:33:03.905288   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 60/120
	I0919 19:33:04.906622   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 61/120
	I0919 19:33:05.907875   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 62/120
	I0919 19:33:06.909164   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 63/120
	I0919 19:33:07.910696   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 64/120
	I0919 19:33:08.912366   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 65/120
	I0919 19:33:09.913747   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 66/120
	I0919 19:33:10.915732   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 67/120
	I0919 19:33:11.917328   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 68/120
	I0919 19:33:12.918547   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 69/120
	I0919 19:33:13.920401   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 70/120
	I0919 19:33:14.921663   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 71/120
	I0919 19:33:15.922914   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 72/120
	I0919 19:33:16.924151   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 73/120
	I0919 19:33:17.925476   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 74/120
	I0919 19:33:18.927159   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 75/120
	I0919 19:33:19.928484   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 76/120
	I0919 19:33:20.930130   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 77/120
	I0919 19:33:21.931412   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 78/120
	I0919 19:33:22.932598   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 79/120
	I0919 19:33:23.934253   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 80/120
	I0919 19:33:24.935801   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 81/120
	I0919 19:33:25.937166   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 82/120
	I0919 19:33:26.938497   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 83/120
	I0919 19:33:27.939754   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 84/120
	I0919 19:33:28.941750   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 85/120
	I0919 19:33:29.943098   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 86/120
	I0919 19:33:30.944528   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 87/120
	I0919 19:33:31.945911   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 88/120
	I0919 19:33:32.947132   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 89/120
	I0919 19:33:33.948833   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 90/120
	I0919 19:33:34.950018   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 91/120
	I0919 19:33:35.951361   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 92/120
	I0919 19:33:36.952598   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 93/120
	I0919 19:33:37.954063   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 94/120
	I0919 19:33:38.955719   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 95/120
	I0919 19:33:39.956912   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 96/120
	I0919 19:33:40.958093   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 97/120
	I0919 19:33:41.959366   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 98/120
	I0919 19:33:42.960714   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 99/120
	I0919 19:33:43.962444   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 100/120
	I0919 19:33:44.964237   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 101/120
	I0919 19:33:45.965439   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 102/120
	I0919 19:33:46.967168   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 103/120
	I0919 19:33:47.968486   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 104/120
	I0919 19:33:48.970197   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 105/120
	I0919 19:33:49.971497   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 106/120
	I0919 19:33:50.972914   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 107/120
	I0919 19:33:51.974324   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 108/120
	I0919 19:33:52.975767   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 109/120
	I0919 19:33:53.977645   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 110/120
	I0919 19:33:54.978969   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 111/120
	I0919 19:33:55.981275   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 112/120
	I0919 19:33:56.983607   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 113/120
	I0919 19:33:57.984997   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 114/120
	I0919 19:33:58.987331   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 115/120
	I0919 19:33:59.988658   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 116/120
	I0919 19:34:00.990022   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 117/120
	I0919 19:34:01.991823   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 118/120
	I0919 19:34:02.993209   35115 main.go:141] libmachine: (ha-076992-m03) Waiting for machine to stop 119/120
	I0919 19:34:03.993724   35115 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0919 19:34:03.993777   35115 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 19:34:03.995551   35115 out.go:201] 
	W0919 19:34:03.996840   35115 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0919 19:34:03.996857   35115 out.go:270] * 
	W0919 19:34:03.999106   35115 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 19:34:04.000100   35115 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-076992 -v=7 --alsologtostderr" : exit status 82
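The repeated "Waiting for machine to stop N/120" lines and the GUEST_STOP_TIMEOUT exit above come from a bounded polling loop: the stop path requests a shutdown, then re-checks the VM state roughly once per second for 120 attempts before giving up with a temporary error. The following is a minimal Go sketch of that pattern only; the names (Driver, State, waitForStop, stubDriver) are hypothetical and do not mirror minikube/libmachine internals.

	package main

	import (
		"errors"
		"fmt"
		"log"
		"time"
	)

	// State is a simplified machine state for this sketch.
	type State int

	const (
		StateRunning State = iota
		StateStopped
	)

	// Driver is a stand-in for a machine driver that can stop a VM and report its state.
	type Driver interface {
		Stop() error
		GetState() (State, error)
	}

	// waitForStop asks the driver to stop, then polls the state once per second
	// up to `attempts` times, mirroring the "Waiting for machine to stop N/120"
	// lines seen in the log. If the budget runs out it returns the same kind of
	// "unable to stop vm" error that surfaces as GUEST_STOP_TIMEOUT above.
	func waitForStop(d Driver, attempts int) error {
		if err := d.Stop(); err != nil {
			return fmt.Errorf("initiating stop: %w", err)
		}
		for i := 0; i < attempts; i++ {
			st, err := d.GetState()
			if err != nil {
				return fmt.Errorf("querying state: %w", err)
			}
			if st == StateStopped {
				return nil
			}
			log.Printf("Waiting for machine to stop %d/%d", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// stubDriver never actually stops, reproducing the failure mode in this run.
	type stubDriver struct{}

	func (stubDriver) Stop() error              { return nil }
	func (stubDriver) GetState() (State, error) { return StateRunning, nil }

	func main() {
		// A small attempt budget keeps the demo short; the real run used 120.
		if err := waitForStop(stubDriver{}, 5); err != nil {
			log.Printf("stop host returned error: %v", err)
		}
	}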
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-076992 --wait=true -v=7 --alsologtostderr
E0919 19:34:27.040969   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:38:59.335211   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-076992 --wait=true -v=7 --alsologtostderr: (5m5.371170565s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-076992
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-076992 -n ha-076992
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-076992 logs -n 25: (1.897633097s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m02:/home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m04 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp testdata/cp-test.txt                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992:/home/docker/cp-test_ha-076992-m04_ha-076992.txt                       |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992 sudo cat                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992.txt                                 |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m02:/home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03:/home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m03 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-076992 node stop m02 -v=7                                                     | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-076992 node start m02 -v=7                                                    | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-076992 -v=7                                                           | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-076992 -v=7                                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-076992 --wait=true -v=7                                                    | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:34 UTC | 19 Sep 24 19:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-076992                                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:39 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 19:34:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 19:34:04.045011   35612 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:34:04.045279   35612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:34:04.045288   35612 out.go:358] Setting ErrFile to fd 2...
	I0919 19:34:04.045291   35612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:34:04.045459   35612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:34:04.045994   35612 out.go:352] Setting JSON to false
	I0919 19:34:04.046891   35612 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4588,"bootTime":1726769856,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:34:04.046988   35612 start.go:139] virtualization: kvm guest
	I0919 19:34:04.049154   35612 out.go:177] * [ha-076992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:34:04.050341   35612 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:34:04.050350   35612 notify.go:220] Checking for updates...
	I0919 19:34:04.052730   35612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:34:04.053959   35612 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:34:04.055026   35612 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:34:04.056037   35612 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:34:04.057120   35612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:34:04.058750   35612 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:34:04.058834   35612 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:34:04.059303   35612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:34:04.059343   35612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:34:04.074403   35612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39207
	I0919 19:34:04.074797   35612 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:34:04.075316   35612 main.go:141] libmachine: Using API Version  1
	I0919 19:34:04.075340   35612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:34:04.075751   35612 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:34:04.075940   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:34:04.110065   35612 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 19:34:04.111249   35612 start.go:297] selected driver: kvm2
	I0919 19:34:04.111262   35612 start.go:901] validating driver "kvm2" against &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:34:04.111400   35612 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:34:04.111717   35612 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:34:04.111804   35612 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 19:34:04.127202   35612 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 19:34:04.128165   35612 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:34:04.128211   35612 cni.go:84] Creating CNI manager for ""
	I0919 19:34:04.128261   35612 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 19:34:04.128337   35612 start.go:340] cluster config:
	{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:34:04.128472   35612 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:34:04.130331   35612 out.go:177] * Starting "ha-076992" primary control-plane node in "ha-076992" cluster
	I0919 19:34:04.131735   35612 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:34:04.131785   35612 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 19:34:04.131800   35612 cache.go:56] Caching tarball of preloaded images
	I0919 19:34:04.131918   35612 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:34:04.131931   35612 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:34:04.132044   35612 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:34:04.132253   35612 start.go:360] acquireMachinesLock for ha-076992: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:34:04.132298   35612 start.go:364] duration metric: took 27.107µs to acquireMachinesLock for "ha-076992"
	I0919 19:34:04.132314   35612 start.go:96] Skipping create...Using existing machine configuration
	I0919 19:34:04.132322   35612 fix.go:54] fixHost starting: 
	I0919 19:34:04.132571   35612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:34:04.132600   35612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:34:04.147138   35612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39301
	I0919 19:34:04.147542   35612 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:34:04.148023   35612 main.go:141] libmachine: Using API Version  1
	I0919 19:34:04.148049   35612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:34:04.148367   35612 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:34:04.148598   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:34:04.148771   35612 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:34:04.150428   35612 fix.go:112] recreateIfNeeded on ha-076992: state=Running err=<nil>
	W0919 19:34:04.150449   35612 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 19:34:04.152612   35612 out.go:177] * Updating the running kvm2 "ha-076992" VM ...
	I0919 19:34:04.153913   35612 machine.go:93] provisionDockerMachine start ...
	I0919 19:34:04.153932   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:34:04.154146   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.157199   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.157687   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.157706   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.157843   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:34:04.158020   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.158147   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.158315   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:34:04.158486   35612 main.go:141] libmachine: Using SSH client type: native
	I0919 19:34:04.158697   35612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:34:04.158708   35612 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 19:34:04.262495   35612 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:34:04.262535   35612 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:34:04.262777   35612 buildroot.go:166] provisioning hostname "ha-076992"
	I0919 19:34:04.262805   35612 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:34:04.262983   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.265489   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.265882   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.265909   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.266078   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:34:04.266250   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.266390   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.266505   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:34:04.266624   35612 main.go:141] libmachine: Using SSH client type: native
	I0919 19:34:04.266852   35612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:34:04.266869   35612 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992 && echo "ha-076992" | sudo tee /etc/hostname
	I0919 19:34:04.385951   35612 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:34:04.385978   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.388928   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.389351   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.389379   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.389547   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:34:04.389710   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.389885   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.390034   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:34:04.390172   35612 main.go:141] libmachine: Using SSH client type: native
	I0919 19:34:04.390330   35612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:34:04.390345   35612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:34:04.490288   35612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:34:04.490326   35612 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:34:04.490378   35612 buildroot.go:174] setting up certificates
	I0919 19:34:04.490388   35612 provision.go:84] configureAuth start
	I0919 19:34:04.490400   35612 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:34:04.490643   35612 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:34:04.493445   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.493787   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.493815   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.493985   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.495866   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.496276   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.496301   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.496451   35612 provision.go:143] copyHostCerts
	I0919 19:34:04.496482   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:34:04.496521   35612 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:34:04.496529   35612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:34:04.496595   35612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:34:04.496680   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:34:04.496696   35612 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:34:04.496703   35612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:34:04.496727   35612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:34:04.496803   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:34:04.496823   35612 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:34:04.496828   35612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:34:04.496850   35612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:34:04.496914   35612 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992 san=[127.0.0.1 192.168.39.173 ha-076992 localhost minikube]
	I0919 19:34:04.695896   35612 provision.go:177] copyRemoteCerts
	I0919 19:34:04.695965   35612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:34:04.695993   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.698657   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.699041   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.699069   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.699252   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:34:04.699445   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.699607   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:34:04.699776   35612 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:34:04.781558   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:34:04.781640   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:34:04.809342   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:34:04.809417   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 19:34:04.836693   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:34:04.836777   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 19:34:04.863522   35612 provision.go:87] duration metric: took 373.112415ms to configureAuth
	I0919 19:34:04.863562   35612 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:34:04.863917   35612 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:34:04.864070   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.867216   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.867651   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.867677   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.867836   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:34:04.868019   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.868167   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.868299   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:34:04.868459   35612 main.go:141] libmachine: Using SSH client type: native
	I0919 19:34:04.868642   35612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:34:04.868659   35612 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:35:35.736663   35612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:35:35.736699   35612 machine.go:96] duration metric: took 1m31.582773469s to provisionDockerMachine
	I0919 19:35:35.736712   35612 start.go:293] postStartSetup for "ha-076992" (driver="kvm2")
	I0919 19:35:35.736726   35612 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:35:35.736745   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:35.737105   35612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:35:35.737142   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:35:35.740171   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.740643   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:35.740668   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.740830   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:35:35.741076   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:35.741263   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:35:35.741412   35612 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:35:35.824833   35612 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:35:35.829521   35612 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:35:35.829553   35612 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:35:35.829635   35612 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:35:35.829737   35612 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:35:35.829749   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:35:35.829862   35612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:35:35.839430   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:35:35.864494   35612 start.go:296] duration metric: took 127.769368ms for postStartSetup
	I0919 19:35:35.864537   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:35.864806   35612 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0919 19:35:35.864831   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:35:35.867614   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.868031   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:35.868051   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.868196   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:35:35.868344   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:35.868475   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:35:35.868623   35612 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	W0919 19:35:35.947990   35612 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0919 19:35:35.948018   35612 fix.go:56] duration metric: took 1m31.815695978s for fixHost
	I0919 19:35:35.948040   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:35:35.951001   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.951351   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:35.951379   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.951508   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:35:35.951666   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:35.951818   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:35.951993   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:35:35.952176   35612 main.go:141] libmachine: Using SSH client type: native
	I0919 19:35:35.952367   35612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:35:35.952380   35612 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:35:36.054112   35612 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726774536.022437873
	
	I0919 19:35:36.054137   35612 fix.go:216] guest clock: 1726774536.022437873
	I0919 19:35:36.054154   35612 fix.go:229] Guest: 2024-09-19 19:35:36.022437873 +0000 UTC Remote: 2024-09-19 19:35:35.9480247 +0000 UTC m=+91.938130215 (delta=74.413173ms)
	I0919 19:35:36.054205   35612 fix.go:200] guest clock delta is within tolerance: 74.413173ms
	I0919 19:35:36.054212   35612 start.go:83] releasing machines lock for "ha-076992", held for 1m31.921904362s
	I0919 19:35:36.054240   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:36.054496   35612 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:35:36.056877   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.057258   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:36.057321   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.057448   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:36.058036   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:36.058215   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:36.058311   35612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:35:36.058357   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:35:36.058401   35612 ssh_runner.go:195] Run: cat /version.json
	I0919 19:35:36.058425   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:35:36.061276   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.061548   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.061780   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:36.061801   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.061918   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:35:36.061959   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:36.061983   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.062079   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:36.062131   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:35:36.062430   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:35:36.062432   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:36.062616   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:35:36.062611   35612 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:35:36.062765   35612 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:35:36.159444   35612 ssh_runner.go:195] Run: systemctl --version
	I0919 19:35:36.165753   35612 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:35:36.324216   35612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:35:36.333136   35612 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:35:36.333202   35612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:35:36.342917   35612 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 19:35:36.342941   35612 start.go:495] detecting cgroup driver to use...
	I0919 19:35:36.343015   35612 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:35:36.360057   35612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:35:36.374750   35612 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:35:36.374816   35612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:35:36.389007   35612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:35:36.403039   35612 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:35:36.554664   35612 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:35:36.700712   35612 docker.go:233] disabling docker service ...
	I0919 19:35:36.700789   35612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:35:36.716809   35612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:35:36.730663   35612 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:35:36.872963   35612 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:35:37.017479   35612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:35:37.032027   35612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:35:37.049710   35612 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:35:37.049764   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.060158   35612 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:35:37.060252   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.070881   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.081722   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.092191   35612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:35:37.102727   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.113591   35612 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.124382   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.134409   35612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:35:37.143345   35612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:35:37.152486   35612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:35:37.292224   35612 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:35:41.818659   35612 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.52639635s)
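For context, the CRI-O preparation above is a handful of host-side file edits (pause image, cgroup driver, unprivileged-port sysctl) followed by a service restart. The sketch below simply replays a few of the exact commands shown in the log via os/exec; it is illustrative only and not minikube's ssh_runner implementation.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one host-side edit with sudo, echoing output and any error.
func run(cmd string) {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s(err=%v)\n", cmd, out, err)
}

func main() {
	run(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`)
	run(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`)
	run(`echo 1 > /proc/sys/net/ipv4/ip_forward`)
	run(`systemctl restart crio`)
}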
	I0919 19:35:41.818693   35612 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:35:41.818747   35612 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:35:41.824935   35612 start.go:563] Will wait 60s for crictl version
	I0919 19:35:41.824995   35612 ssh_runner.go:195] Run: which crictl
	I0919 19:35:41.828800   35612 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:35:41.868007   35612 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:35:41.868091   35612 ssh_runner.go:195] Run: crio --version
	I0919 19:35:41.897790   35612 ssh_runner.go:195] Run: crio --version
	I0919 19:35:41.928421   35612 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:35:41.930102   35612 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:35:41.932877   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:41.933423   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:41.933458   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:41.933568   35612 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:35:41.938448   35612 kubeadm.go:883] updating cluster {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 19:35:41.938660   35612 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:35:41.938725   35612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:35:41.983134   35612 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:35:41.983159   35612 crio.go:433] Images already preloaded, skipping extraction
	I0919 19:35:41.983213   35612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:35:42.016823   35612 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:35:42.016845   35612 cache_images.go:84] Images are preloaded, skipping loading
	I0919 19:35:42.016853   35612 kubeadm.go:934] updating node { 192.168.39.173 8443 v1.31.1 crio true true} ...
	I0919 19:35:42.016950   35612 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:35:42.017008   35612 ssh_runner.go:195] Run: crio config
	I0919 19:35:42.069951   35612 cni.go:84] Creating CNI manager for ""
	I0919 19:35:42.069971   35612 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 19:35:42.069980   35612 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 19:35:42.070000   35612 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076992 NodeName:ha-076992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 19:35:42.070123   35612 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
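One invariant worth noting in the generated config above: the pod CIDR in the ClusterConfiguration (podSubnet) must agree with kube-proxy's clusterCIDR, and both are 10.244.0.0/16 here. A small, hypothetical Go check of that invariant, using gopkg.in/yaml.v3 and hard-coded fragments of the config above (not minikube code):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const clusterCfg = `
networking:
  podSubnet: "10.244.0.0/16"
`

const proxyCfg = `
clusterCIDR: "10.244.0.0/16"
`

func main() {
	var c struct {
		Networking struct {
			PodSubnet string `yaml:"podSubnet"`
		} `yaml:"networking"`
	}
	var p struct {
		ClusterCIDR string `yaml:"clusterCIDR"`
	}
	if err := yaml.Unmarshal([]byte(clusterCfg), &c); err != nil {
		panic(err)
	}
	if err := yaml.Unmarshal([]byte(proxyCfg), &p); err != nil {
		panic(err)
	}
	fmt.Println("pod CIDR and kube-proxy clusterCIDR match:", c.Networking.PodSubnet == p.ClusterCIDR)
}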
	
	I0919 19:35:42.070140   35612 kube-vip.go:115] generating kube-vip config ...
	I0919 19:35:42.070180   35612 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:35:42.082826   35612 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:35:42.082950   35612 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
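The kube-vip manifest above advertises 192.168.39.254:8443 as the HA control-plane endpoint (cp_enable and lb_enable are on). A trivial, illustrative Go probe of that VIP, the way any client of the endpoint would reach it; this is a sketch, not part of the test harness.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address and port taken from the kube-vip config logged above.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP is accepting connections")
}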
	I0919 19:35:42.083005   35612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:35:42.093786   35612 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 19:35:42.093842   35612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 19:35:42.103536   35612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0919 19:35:42.120038   35612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:35:42.136696   35612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0919 19:35:42.152987   35612 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:35:42.170154   35612 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:35:42.174784   35612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:35:42.335803   35612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:35:42.350997   35612 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.173
	I0919 19:35:42.351024   35612 certs.go:194] generating shared ca certs ...
	I0919 19:35:42.351040   35612 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:35:42.351237   35612 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:35:42.351293   35612 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:35:42.351309   35612 certs.go:256] generating profile certs ...
	I0919 19:35:42.351419   35612 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:35:42.351454   35612 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.580518db
	I0919 19:35:42.351487   35612 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.580518db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.232 192.168.39.66 192.168.39.254]
	I0919 19:35:42.710621   35612 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.580518db ...
	I0919 19:35:42.710653   35612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.580518db: {Name:mka21968dcff4ec4de345cb34b1a85027031721f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:35:42.710841   35612 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.580518db ...
	I0919 19:35:42.710853   35612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.580518db: {Name:mk6e7e419864b86fa4a72d9703cfc517cf6d9d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:35:42.710919   35612 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.580518db -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:35:42.711052   35612 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.580518db -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
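The apiserver serving certificate generated above carries IP SANs for the service IP, loopback, each control-plane node, and the kube-vip address. A minimal sketch of producing such a certificate with Go's standard crypto/x509 package is shown below; it is self-signed for brevity and is not minikube's crypto.go implementation (the real cert is signed by minikubeCA).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs mirroring the list logged above for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.173"), net.ParseIP("192.168.39.232"),
			net.ParseIP("192.168.39.66"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed here for brevity; minikube signs with the shared CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}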
	I0919 19:35:42.711183   35612 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:35:42.711198   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:35:42.711211   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:35:42.711224   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:35:42.711237   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:35:42.711250   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:35:42.711262   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:35:42.711274   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:35:42.711285   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:35:42.711338   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:35:42.711366   35612 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:35:42.711376   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:35:42.711398   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:35:42.711421   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:35:42.711441   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:35:42.711477   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:35:42.711505   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:35:42.711518   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:35:42.711530   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:35:42.712064   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:35:42.738049   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:35:42.761697   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:35:42.786470   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:35:42.810647   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 19:35:42.834879   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 19:35:42.860209   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:35:42.885501   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:35:42.909808   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:35:42.933446   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:35:42.957769   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:35:42.981374   35612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 19:35:42.997806   35612 ssh_runner.go:195] Run: openssl version
	I0919 19:35:43.003967   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:35:43.014926   35612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:35:43.019762   35612 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:35:43.019820   35612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:35:43.025576   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:35:43.035164   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:35:43.046092   35612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:35:43.050733   35612 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:35:43.050777   35612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:35:43.056382   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:35:43.066161   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:35:43.077503   35612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:35:43.082423   35612 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:35:43.082472   35612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:35:43.088485   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
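The three symlink steps above install each CA bundle under /etc/ssl/certs/<subject-hash>.0, where the hash comes from "openssl x509 -hash". A hypothetical Go sketch of one such step, shelling out to the same openssl command seen in the log (it needs root to write into /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA in this run
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink failed (likely needs root):", err)
		return
	}
	fmt.Println("linked", link, "->", cert)
}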
	I0919 19:35:43.098416   35612 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:35:43.103155   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 19:35:43.109003   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 19:35:43.114566   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 19:35:43.120192   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 19:35:43.125770   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 19:35:43.131316   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
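Each "-checkend 86400" run above asks whether a certificate expires within the next 24 hours. The same check can be expressed directly against NotAfter with Go's crypto/x509; the sketch below uses one of the paths from the log and is illustrative, not minikube code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`.
	expiresSoon := time.Now().Add(24 * time.Hour).After(cert.NotAfter)
	fmt.Println("expires within 24h:", expiresSoon)
}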
	I0919 19:35:43.137014   35612 kubeadm.go:392] StartCluster: {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:35:43.137188   35612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 19:35:43.137243   35612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 19:35:43.178107   35612 cri.go:89] found id: "a2fc732004b04ea4a6ca212d7bc10b2d00a4a4d143d966ec9f87cc517e9d10d0"
	I0919 19:35:43.178129   35612 cri.go:89] found id: "1cf6eed5d6c49a78a045d5c52b9176fb4958fda7be711c94debacd6b78c95218"
	I0919 19:35:43.178133   35612 cri.go:89] found id: "8f9eddf8eefc0e3e2393d684dfb9c3349ddcceaafb9c51ed54961ea5da8caf71"
	I0919 19:35:43.178136   35612 cri.go:89] found id: "17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3"
	I0919 19:35:43.178139   35612 cri.go:89] found id: "cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0"
	I0919 19:35:43.178142   35612 cri.go:89] found id: "6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856"
	I0919 19:35:43.178145   35612 cri.go:89] found id: "d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4"
	I0919 19:35:43.178147   35612 cri.go:89] found id: "9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2"
	I0919 19:35:43.178150   35612 cri.go:89] found id: "3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389"
	I0919 19:35:43.178155   35612 cri.go:89] found id: "5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1"
	I0919 19:35:43.178157   35612 cri.go:89] found id: "f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501"
	I0919 19:35:43.178160   35612 cri.go:89] found id: "3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018"
	I0919 19:35:43.178162   35612 cri.go:89] found id: "5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b"
	I0919 19:35:43.178166   35612 cri.go:89] found id: ""
	I0919 19:35:43.178206   35612 ssh_runner.go:195] Run: sudo runc list -f json
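The container discovery step above filters CRI containers by the kube-system namespace label. A small sketch of the same query using the crictl invocation shown in the log, wrapped in Go's os/exec (illustrative only, run on the node itself):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
}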
	
	
	==> CRI-O <==
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.087364008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774750087337305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e840c7aa-2b1d-4288-a6cb-368949ff3cf2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.087945171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd2a590a-d77d-44b9-951f-cb13050ca76d name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.088049557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd2a590a-d77d-44b9-951f-cb13050ca76d name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.088444812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726774630404511008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726774593449933479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726774593413510889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63df2e8772528c1c649ba71943a50c5a9584fc0c35d1e10002a0188afe543524,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726774588400631013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774582732602783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726774564362621411,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726774549595312188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726774549590216977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0
f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726774549543081171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db64
55320b4234dac25c23d10d7757629b3f372,PodSandboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774549399831645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726774549303480080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726774549256337526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726774549115414423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774544774674022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774089735408659,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950241312081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950179536093,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726773937822274967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726773937599657860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726773925364635092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726773925242908998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd2a590a-d77d-44b9-951f-cb13050ca76d name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.140767231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=589327e5-27e2-4105-b5c9-26cd33ee2167 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.140847753Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=589327e5-27e2-4105-b5c9-26cd33ee2167 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.142479592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96be2e2c-720a-4d55-ad9e-bd13f75c46e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.143026615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774750142948796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96be2e2c-720a-4d55-ad9e-bd13f75c46e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.143629635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8836278-b61c-432c-8a69-40c24ad7613b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.143691633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8836278-b61c-432c-8a69-40c24ad7613b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.144190287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726774630404511008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726774593449933479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726774593413510889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63df2e8772528c1c649ba71943a50c5a9584fc0c35d1e10002a0188afe543524,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726774588400631013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774582732602783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726774564362621411,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726774549595312188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726774549590216977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0
f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726774549543081171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db64
55320b4234dac25c23d10d7757629b3f372,PodSandboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774549399831645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726774549303480080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726774549256337526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726774549115414423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774544774674022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774089735408659,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950241312081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950179536093,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726773937822274967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726773937599657860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726773925364635092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726773925242908998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8836278-b61c-432c-8a69-40c24ad7613b name=/runtime.v1.RuntimeService/ListContainers
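	For reference, the cycle above (a /runtime.v1.RuntimeService/Version call, a /runtime.v1.ImageService/ImageFsInfo call, and an unfiltered /runtime.v1.RuntimeService/ListContainers call) can be reproduced directly against the CRI-O socket. The following is a minimal illustrative sketch, not part of the test run: it assumes the default /var/run/crio/crio.sock socket path and uses the k8s.io/cri-api Go bindings; adjust the socket path if the runtime is configured differently.

	// cri_probe.go - query the same CRI endpoints seen in the crio debug log.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default socket path (an assumption; crio.conf may override it).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

		// /runtime.v1.ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, u := range fs.ImageFilesystems {
			fmt.Println("image fs:", u.FsId.Mountpoint, "used bytes:", u.UsedBytes.Value)
		}

		// /runtime.v1.RuntimeService/ListContainers with an empty filter, matching
		// the "No filters were applied, returning full container list" entries above.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}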
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.201514721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed57b1db-9259-4286-85bd-45094dba77c7 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.201586220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed57b1db-9259-4286-85bd-45094dba77c7 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.202969730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02fefefd-9c5c-43ff-afcf-bf9daaa328b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.203688658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774750203656001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02fefefd-9c5c-43ff-afcf-bf9daaa328b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.204359982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba6f7cba-1e43-40d9-aad9-4d77b5dcf0d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.204419913Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba6f7cba-1e43-40d9-aad9-4d77b5dcf0d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.204822559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726774630404511008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726774593449933479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726774593413510889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63df2e8772528c1c649ba71943a50c5a9584fc0c35d1e10002a0188afe543524,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726774588400631013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774582732602783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726774564362621411,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726774549595312188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726774549590216977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0
f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726774549543081171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db64
55320b4234dac25c23d10d7757629b3f372,PodSandboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774549399831645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726774549303480080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726774549256337526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726774549115414423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774544774674022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774089735408659,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950241312081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950179536093,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726773937822274967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726773937599657860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726773925364635092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726773925242908998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba6f7cba-1e43-40d9-aad9-4d77b5dcf0d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.259387888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22a381f9-74f9-4619-a1c0-bf872483b0d7 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.259483983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22a381f9-74f9-4619-a1c0-bf872483b0d7 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.260518631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c2d7692-a60f-44d8-b4aa-086554bdaaa1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.261105029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774750261078034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c2d7692-a60f-44d8-b4aa-086554bdaaa1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.261896673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89f67e2b-eb6e-478d-8ea8-09801e4b5690 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.261965044Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89f67e2b-eb6e-478d-8ea8-09801e4b5690 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:39:10 ha-076992 crio[3621]: time="2024-09-19 19:39:10.264167337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726774630404511008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726774593449933479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726774593413510889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63df2e8772528c1c649ba71943a50c5a9584fc0c35d1e10002a0188afe543524,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726774588400631013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774582732602783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726774564362621411,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726774549595312188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726774549590216977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0
f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726774549543081171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db64
55320b4234dac25c23d10d7757629b3f372,PodSandboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774549399831645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726774549303480080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726774549256337526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726774549115414423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774544774674022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774089735408659,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950241312081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950179536093,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726773937822274967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726773937599657860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726773925364635092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726773925242908998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89f67e2b-eb6e-478d-8ea8-09801e4b5690 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	004cf0a26efe0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   79d0bd128843b       storage-provisioner
	44e35509c3580       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Running             kube-controller-manager   2                   db14226d4ecb0       kube-controller-manager-ha-076992
	2e1f4501fff9a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            3                   afc4e7e19236b       kube-apiserver-ha-076992
	63df2e8772528       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   79d0bd128843b       storage-provisioner
	b1cfb43f1ef0c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   8772b407d7c25       busybox-7dff88458-8wfb7
	4526c50933cab       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      3 minutes ago        Running             kube-vip                  0                   4f59647076dbb       kube-vip-ha-076992
	c412d5b70d043       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      3 minutes ago        Running             kube-proxy                1                   8209dcfdd30b4       kube-proxy-4d8dc
	6e386f72e5d37       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago        Running             kindnet-cni               1                   c194bf9cd1d21       kindnet-j846w
	cfb4ace0f3e59       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      3 minutes ago        Running             kube-scheduler            1                   e9e69a1062cea       kube-scheduler-ha-076992
	b344ac64a2b99       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago        Running             coredns                   1                   80031de6f8921       coredns-7c65d6cfc9-bst8x
	2810749ec6ddc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago        Running             etcd                      1                   fb62ba74ee7f1       etcd-ha-076992
	262c164bf25b4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      3 minutes ago        Exited              kube-controller-manager   1                   db14226d4ecb0       kube-controller-manager-ha-076992
	d6a80e0201608       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago        Exited              kube-apiserver            2                   afc4e7e19236b       kube-apiserver-ha-076992
	611497be6a620       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago        Running             coredns                   1                   257eb8bdca5fb       coredns-7c65d6cfc9-nbds4
	52db63dad4c31       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   a8aaf854df641       busybox-7dff88458-8wfb7
	17ef846dadbee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   8583d1eda759f       coredns-7c65d6cfc9-nbds4
	cbaa19f6b3857       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago       Exited              coredns                   0                   d65bb54e4c426       coredns-7c65d6cfc9-bst8x
	d623b5f012d8a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   0273544afdfa6       kindnet-j846w
	9d62ecb2cc70a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   2a6c6ac66a434       kube-proxy-4d8dc
	5745c8d186325       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   09b02f34308ad       kube-scheduler-ha-076992
	3beffc038ef33       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   fc5737a4c0f5c       etcd-ha-076992
	
	
	==> coredns [17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3] <==
	[INFO] 10.244.2.2:35304 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000093782s
	[INFO] 10.244.0.4:60710 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175542s
	[INFO] 10.244.0.4:56638 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002407779s
	[INFO] 10.244.1.2:60721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148724s
	[INFO] 10.244.2.2:40070 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138971s
	[INFO] 10.244.2.2:53394 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186542s
	[INFO] 10.244.2.2:54178 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000225634s
	[INFO] 10.244.2.2:53480 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001438271s
	[INFO] 10.244.2.2:48475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168626s
	[INFO] 10.244.2.2:49380 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160453s
	[INFO] 10.244.2.2:38326 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100289s
	[INFO] 10.244.1.2:47564 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107018s
	[INFO] 10.244.0.4:55521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119496s
	[INFO] 10.244.0.4:51830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118694s
	[INFO] 10.244.0.4:49301 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000181413s
	[INFO] 10.244.1.2:38961 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124955s
	[INFO] 10.244.1.2:37060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092863s
	[INFO] 10.244.1.2:44024 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085892s
	[INFO] 10.244.2.2:35688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014156s
	[INFO] 10.244.2.2:33974 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170311s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1419069143]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 19:36:00.363) (total time: 10001ms):
	Trace[1419069143]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:36:10.364)
	Trace[1419069143]: [10.001580214s] [10.001580214s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50042->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50042->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b344ac64a2b998915ace13c79db6455320b4234dac25c23d10d7757629b3f372] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0] <==
	[INFO] 10.244.1.2:60797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218519s
	[INFO] 10.244.1.2:44944 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001794304s
	[INFO] 10.244.1.2:51111 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185225s
	[INFO] 10.244.1.2:46956 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160685s
	[INFO] 10.244.1.2:36318 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001321241s
	[INFO] 10.244.1.2:53158 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118134s
	[INFO] 10.244.1.2:45995 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102925s
	[INFO] 10.244.2.2:55599 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001757807s
	[INFO] 10.244.0.4:50520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118756s
	[INFO] 10.244.0.4:48294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189838s
	[INFO] 10.244.0.4:52710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005729s
	[INFO] 10.244.0.4:56525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085763s
	[INFO] 10.244.1.2:43917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168832s
	[INFO] 10.244.1.2:34972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200932s
	[INFO] 10.244.1.2:50680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181389s
	[INFO] 10.244.2.2:51430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152587s
	[INFO] 10.244.2.2:37924 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000317695s
	[INFO] 10.244.2.2:46377 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000371446s
	[INFO] 10.244.2.2:36790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012815s
	[INFO] 10.244.0.4:35196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000409388s
	[INFO] 10.244.1.2:43265 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235404s
	[INFO] 10.244.2.2:56515 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113892s
	[INFO] 10.244.2.2:33574 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251263s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-076992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T19_25_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:39:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:36:32 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:36:32 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:36:32 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:36:32 +0000   Thu, 19 Sep 2024 19:25:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    ha-076992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 88962b0779f84ff6915974a39d1a24ba
	  System UUID:                88962b07-79f8-4ff6-9159-74a39d1a24ba
	  Boot ID:                    f4736dd6-fd6e-4dc3-b2ee-64f8773325ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8wfb7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-bst8x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-nbds4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-076992                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-j846w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-076992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-076992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4d8dc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-076992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-076992                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m37s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-076992 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-076992 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-076992 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-076992 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Warning  ContainerGCFailed        3m39s (x2 over 4m39s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m27s (x3 over 4m16s)  kubelet          Node ha-076992 status is now: NodeNotReady
	  Normal   RegisteredNode           2m41s                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           2m31s                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           38s                    node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	
	
	Name:               ha-076992-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_26_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:26:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:39:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:37:15 +0000   Thu, 19 Sep 2024 19:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:37:15 +0000   Thu, 19 Sep 2024 19:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:37:15 +0000   Thu, 19 Sep 2024 19:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:37:15 +0000   Thu, 19 Sep 2024 19:36:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-076992-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fbb92a6f6fa49d49b42ed70b015086d
	  System UUID:                7fbb92a6-f6fa-49d4-9b42-ed70b015086d
	  Boot ID:                    0fe45e85-4f9b-481a-8bc8-b98a6c8a000b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c64rv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-076992-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-6d8pz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-076992-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-076992-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tjtfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-076992-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-076992-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m14s                kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-076992-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-076992-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-076992-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  NodeNotReady             9m10s                node-controller  Node ha-076992-m02 status is now: NodeNotReady
	  Normal  Starting                 3m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m4s (x8 over 3m4s)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x8 over 3m4s)  kubelet          Node ha-076992-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x7 over 3m4s)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m41s                node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  RegisteredNode           2m31s                node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  RegisteredNode           38s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	
	
	Name:               ha-076992-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_27_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:27:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:39:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:38:44 +0000   Thu, 19 Sep 2024 19:38:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:38:44 +0000   Thu, 19 Sep 2024 19:38:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:38:44 +0000   Thu, 19 Sep 2024 19:38:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:38:44 +0000   Thu, 19 Sep 2024 19:38:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.66
	  Hostname:    ha-076992-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0db72b5d16d8492b8f2f42e6cedd7691
	  System UUID:                0db72b5d-16d8-492b-8f2f-42e6cedd7691
	  Boot ID:                    f9fb96ad-0e0c-4922-8a39-7cd5aff72147
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jl6lr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-076992-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-89gmh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-076992-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-076992-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4qxzr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-076992-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-076992-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-076992-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-076992-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-076992-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal   RegisteredNode           2m41s              node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal   RegisteredNode           2m31s              node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	  Normal   NodeNotReady             2m                 node-controller  Node ha-076992-m03 status is now: NodeNotReady
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  56s (x2 over 56s)  kubelet          Node ha-076992-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x2 over 56s)  kubelet          Node ha-076992-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x2 over 56s)  kubelet          Node ha-076992-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 56s                kubelet          Node ha-076992-m03 has been rebooted, boot id: f9fb96ad-0e0c-4922-8a39-7cd5aff72147
	  Normal   NodeReady                56s                kubelet          Node ha-076992-m03 status is now: NodeReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-076992-m03 event: Registered Node ha-076992-m03 in Controller
	
	
	Name:               ha-076992-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_28_43_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:28:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:39:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:39:02 +0000   Thu, 19 Sep 2024 19:39:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:39:02 +0000   Thu, 19 Sep 2024 19:39:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:39:02 +0000   Thu, 19 Sep 2024 19:39:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:39:02 +0000   Thu, 19 Sep 2024 19:39:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-076992-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37704cd295b34d23a0864637f4482597
	  System UUID:                37704cd2-95b3-4d23-a086-4637f4482597
	  Boot ID:                    d8d01324-9af8-448e-92c0-f74eecf4a9a9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8jqvd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-8gt7w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-076992-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-076992-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m41s              node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   RegisteredNode           2m31s              node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   NodeNotReady             2m                 node-controller  Node ha-076992-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-076992-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-076992-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-076992-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-076992-m04 has been rebooted, boot id: d8d01324-9af8-448e-92c0-f74eecf4a9a9
	  Normal   NodeReady                8s                 kubelet          Node ha-076992-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.418534] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.061113] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050106] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.181483] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.133235] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.281192] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.948588] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.762419] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.059014] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.974334] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.083682] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344336] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.503085] kauditd_printk_skb: 38 callbacks suppressed
	[Sep19 19:26] kauditd_printk_skb: 26 callbacks suppressed
	[Sep19 19:35] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +0.145564] systemd-fstab-generator[3559]: Ignoring "noauto" option for root device
	[  +0.177187] systemd-fstab-generator[3573]: Ignoring "noauto" option for root device
	[  +0.146656] systemd-fstab-generator[3585]: Ignoring "noauto" option for root device
	[  +0.269791] systemd-fstab-generator[3613]: Ignoring "noauto" option for root device
	[  +5.037197] systemd-fstab-generator[3707]: Ignoring "noauto" option for root device
	[  +0.092071] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.480192] kauditd_printk_skb: 22 callbacks suppressed
	[Sep19 19:36] kauditd_printk_skb: 87 callbacks suppressed
	[  +9.057023] kauditd_printk_skb: 1 callbacks suppressed
	[ +36.276079] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5] <==
	{"level":"warn","ts":"2024-09-19T19:38:09.148105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:38:09.247556Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"db356cbc19811e0e","from":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-19T19:38:10.167100Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a2ed4c579ed15809","rtt":"0s","error":"dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:10.167231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a2ed4c579ed15809","rtt":"0s","error":"dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:10.229368Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.66:2380/version","remote-member-id":"a2ed4c579ed15809","error":"Get \"https://192.168.39.66:2380/version\": dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:10.229510Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a2ed4c579ed15809","error":"Get \"https://192.168.39.66:2380/version\": dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:14.231660Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.66:2380/version","remote-member-id":"a2ed4c579ed15809","error":"Get \"https://192.168.39.66:2380/version\": dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:14.231822Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a2ed4c579ed15809","error":"Get \"https://192.168.39.66:2380/version\": dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:15.167798Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a2ed4c579ed15809","rtt":"0s","error":"dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:15.167901Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a2ed4c579ed15809","rtt":"0s","error":"dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:18.234520Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.66:2380/version","remote-member-id":"a2ed4c579ed15809","error":"Get \"https://192.168.39.66:2380/version\": dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:18.234708Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a2ed4c579ed15809","error":"Get \"https://192.168.39.66:2380/version\": dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:20.168458Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a2ed4c579ed15809","rtt":"0s","error":"dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:20.168553Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a2ed4c579ed15809","rtt":"0s","error":"dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:22.237151Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.66:2380/version","remote-member-id":"a2ed4c579ed15809","error":"Get \"https://192.168.39.66:2380/version\": dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:22.237215Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a2ed4c579ed15809","error":"Get \"https://192.168.39.66:2380/version\": dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-19T19:38:23.676225Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:38:23.677209Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:38:23.677553Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:38:23.700466Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"db356cbc19811e0e","to":"a2ed4c579ed15809","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-19T19:38:23.700548Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:38:23.711555Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"db356cbc19811e0e","to":"a2ed4c579ed15809","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-19T19:38:23.711635Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"warn","ts":"2024-09-19T19:38:25.169427Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a2ed4c579ed15809","rtt":"0s","error":"dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:25.169485Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a2ed4c579ed15809","rtt":"0s","error":"dial tcp 192.168.39.66:2380: connect: connection refused"}
	
	
	==> etcd [3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018] <==
	{"level":"warn","ts":"2024-09-19T19:34:05.000685Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:33:57.984059Z","time spent":"7.016617352s","remote":"127.0.0.1:50258","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	2024/09/19 19:34:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-19T19:34:05.059090Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.173:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T19:34:05.059148Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.173:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-19T19:34:05.059229Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"db356cbc19811e0e","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-19T19:34:05.059414Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059450Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059475Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059572Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059747Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059827Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059857Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059881Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.059909Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.059948Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.060101Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.060155Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.060201Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.060229Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.063297Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.173:2380"}
	{"level":"warn","ts":"2024-09-19T19:34:05.063419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.459547423s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-19T19:34:05.063462Z","caller":"traceutil/trace.go:171","msg":"trace[1076552976] range","detail":"{range_begin:; range_end:; }","duration":"1.459606135s","start":"2024-09-19T19:34:03.603849Z","end":"2024-09-19T19:34:05.063455Z","steps":["trace[1076552976] 'agreement among raft nodes before linearized reading'  (duration: 1.459545565s)"],"step_count":1}
	{"level":"error","ts":"2024-09-19T19:34:05.063513Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-19T19:34:05.063767Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.173:2380"}
	{"level":"info","ts":"2024-09-19T19:34:05.063803Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-076992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.173:2380"],"advertise-client-urls":["https://192.168.39.173:2379"]}
	
	
	==> kernel <==
	 19:39:11 up 14 min,  0 users,  load average: 0.31, 0.50, 0.30
	Linux ha-076992 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3] <==
	I0919 19:38:40.795744       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:38:50.789233       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:38:50.789349       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:38:50.789561       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:38:50.789591       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:38:50.789669       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:38:50.789688       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:38:50.789758       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:38:50.789877       1 main.go:299] handling current node
	I0919 19:39:00.797571       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:39:00.797737       1 main.go:299] handling current node
	I0919 19:39:00.797797       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:39:00.797828       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:39:00.798094       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:39:00.798165       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:39:00.798397       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:39:00.798429       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:39:10.788137       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:39:10.788238       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:39:10.788403       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:39:10.788452       1 main.go:299] handling current node
	I0919 19:39:10.788477       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:39:10.788494       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:39:10.788592       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:39:10.788620       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4] <==
	I0919 19:33:29.295959       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:33:39.295184       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:33:39.295230       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:33:39.295411       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:33:39.295476       1 main.go:299] handling current node
	I0919 19:33:39.295488       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:33:39.295493       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:33:39.295557       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:33:39.295579       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:33:49.295156       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:33:49.296241       1 main.go:299] handling current node
	I0919 19:33:49.296276       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:33:49.296295       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:33:49.296618       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:33:49.296661       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:33:49.296747       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:33:49.296766       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:33:59.295132       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:33:59.295211       1 main.go:299] handling current node
	I0919 19:33:59.295224       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:33:59.295231       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:33:59.295441       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:33:59.295467       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:33:59.295512       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:33:59.295518       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81] <==
	I0919 19:36:35.766650       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0919 19:36:35.824673       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0919 19:36:35.837452       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 19:36:35.837944       1 policy_source.go:224] refreshing policies
	I0919 19:36:35.849358       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 19:36:35.849409       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0919 19:36:35.850496       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 19:36:35.851296       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0919 19:36:35.851329       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0919 19:36:35.851431       1 shared_informer.go:320] Caches are synced for configmaps
	I0919 19:36:35.852209       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0919 19:36:35.856173       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0919 19:36:35.856256       1 aggregator.go:171] initial CRD sync complete...
	I0919 19:36:35.856277       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 19:36:35.856283       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 19:36:35.856287       1 cache.go:39] Caches are synced for autoregister controller
	I0919 19:36:35.857285       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0919 19:36:35.863397       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.232 192.168.39.66]
	I0919 19:36:35.864740       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 19:36:35.871148       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0919 19:36:35.873921       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0919 19:36:35.937513       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 19:36:36.757747       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 19:36:37.192835       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.173 192.168.39.232 192.168.39.66]
	W0919 19:36:47.188227       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.173 192.168.39.232]
	
	
	==> kube-apiserver [d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c] <==
	I0919 19:35:49.660318       1 options.go:228] external host was not specified, using 192.168.39.173
	I0919 19:35:49.674410       1 server.go:142] Version: v1.31.1
	I0919 19:35:49.674455       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:35:50.391038       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0919 19:35:50.403080       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 19:35:50.405606       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0919 19:35:50.405693       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0919 19:35:50.405948       1 instance.go:232] Using reconciler: lease
	W0919 19:36:10.392166       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0919 19:36:10.392220       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0919 19:36:10.409238       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0919 19:36:10.409357       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8] <==
	I0919 19:35:50.908423       1 serving.go:386] Generated self-signed cert in-memory
	I0919 19:35:51.291883       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0919 19:35:51.292124       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:35:51.294092       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 19:35:51.294354       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 19:35:51.294895       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0919 19:35:51.295062       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0919 19:36:11.416373       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.173:8443/healthz\": dial tcp 192.168.39.173:8443: connect: connection refused"
	
	
	==> kube-controller-manager [44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c] <==
	I0919 19:37:10.127253       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.442µs"
	I0919 19:37:14.217641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:37:15.351306       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:37:15.351566       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m03"
	I0919 19:37:24.306966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m03"
	I0919 19:37:25.432385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:37:29.983365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.134148ms"
	I0919 19:37:29.984182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="358.601µs"
	I0919 19:37:30.009654       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-7mwj2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-7mwj2\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 19:37:30.009898       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"be84277a-6ea8-41ad-906e-d906b7facc67", APIVersion:"v1", ResourceVersion:"250", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-7mwj2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-7mwj2": the object has been modified; please apply your changes to the latest version and try again
	I0919 19:37:39.971782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.236188ms"
	I0919 19:37:39.971893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="54.514µs"
	I0919 19:38:14.323237       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m03"
	I0919 19:38:14.348700       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m03"
	I0919 19:38:15.259947       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.876µs"
	I0919 19:38:15.295360       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m03"
	I0919 19:38:32.754567       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:38:32.854513       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:38:34.589527       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.353947ms"
	I0919 19:38:34.589809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="77.7µs"
	I0919 19:38:44.820595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m03"
	I0919 19:39:02.818972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:39:02.818961       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076992-m04"
	I0919 19:39:02.843654       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:39:04.223339       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	
	
	==> kube-proxy [9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2] <==
	E0919 19:32:59.926641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:32:59.926736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:32:59.926780       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:02.995893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:02.996046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:02.996268       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:02.996368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:06.068640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:06.069207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:09.139499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:09.139570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:09.139657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:09.139673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:18.357196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:18.357381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:21.427553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:21.428382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:21.429880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:21.429950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:42.933306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:42.933536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:46.004859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:46.005120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:46.005531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:46.005734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 19:35:51.955971       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:35:55.029038       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:35:58.100556       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:36:04.246747       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:36:16.531671       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0919 19:36:33.434609       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.173"]
	E0919 19:36:33.442335       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:36:33.526674       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 19:36:33.527103       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 19:36:33.527381       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:36:33.533680       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:36:33.534387       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:36:33.534496       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:36:33.538133       1 config.go:199] "Starting service config controller"
	I0919 19:36:33.538362       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:36:33.541156       1 config.go:328] "Starting node config controller"
	I0919 19:36:33.543065       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:36:33.540804       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:36:33.547059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:36:33.653079       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 19:36:33.653127       1 shared_informer.go:320] Caches are synced for service config
	I0919 19:36:33.653246       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1] <==
	I0919 19:25:32.657764       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 19:28:06.097590       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jl6lr\": pod busybox-7dff88458-jl6lr is already assigned to node \"ha-076992-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jl6lr" node="ha-076992-m03"
	E0919 19:28:06.098198       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3f7ee95d-11f9-4073-8fa9-d4aa5fc08d99(default/busybox-7dff88458-jl6lr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jl6lr"
	E0919 19:28:06.098359       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jl6lr\": pod busybox-7dff88458-jl6lr is already assigned to node \"ha-076992-m03\"" pod="default/busybox-7dff88458-jl6lr"
	I0919 19:28:06.098540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jl6lr" node="ha-076992-m03"
	E0919 19:28:06.176510       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8wfb7\": pod busybox-7dff88458-8wfb7 is already assigned to node \"ha-076992\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8wfb7" node="ha-076992"
	E0919 19:28:06.176725       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e9e5cd58-874f-41c6-8c0a-d37b5101a1f9(default/busybox-7dff88458-8wfb7) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8wfb7"
	E0919 19:28:06.181327       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8wfb7\": pod busybox-7dff88458-8wfb7 is already assigned to node \"ha-076992\"" pod="default/busybox-7dff88458-8wfb7"
	I0919 19:28:06.181857       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8wfb7" node="ha-076992"
	E0919 19:33:52.923314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0919 19:33:53.362928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0919 19:33:53.541834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0919 19:33:53.999402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0919 19:33:54.440532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0919 19:33:55.406824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0919 19:33:55.449844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0919 19:33:57.288297       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0919 19:33:58.181022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0919 19:33:59.711856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0919 19:34:00.368470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0919 19:34:00.983401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0919 19:34:01.252059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0919 19:34:01.432147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0919 19:34:01.632427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0919 19:34:04.973856       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cfb4ace0f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79] <==
	W0919 19:36:27.193182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.173:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:27.193269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.173:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:27.892679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.173:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:27.892752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.173:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:27.942935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.173:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:27.943129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.173:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:28.949196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.173:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:28.949299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.173:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.146329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.173:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.146436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.173:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.196546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.173:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.196612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.173:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.396468       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.173:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.396513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.173:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.925771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.173:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.926086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.173:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:30.435838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.173:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:30.436058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.173:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:32.617798       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.173:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:32.617869       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.173:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:33.195606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.173:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:33.195731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.173:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:35.776364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 19:36:35.776452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 19:36:48.923565       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 19:37:31 ha-076992 kubelet[1304]: E0919 19:37:31.606082    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774651605620840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:37:31 ha-076992 kubelet[1304]: E0919 19:37:31.606182    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774651605620840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:37:41 ha-076992 kubelet[1304]: E0919 19:37:41.612163    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774661609338299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:37:41 ha-076992 kubelet[1304]: E0919 19:37:41.612496    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774661609338299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:37:51 ha-076992 kubelet[1304]: E0919 19:37:51.616874    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774671615911272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:37:51 ha-076992 kubelet[1304]: E0919 19:37:51.616930    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774671615911272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:01 ha-076992 kubelet[1304]: E0919 19:38:01.621110    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774681620239227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:01 ha-076992 kubelet[1304]: E0919 19:38:01.621168    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774681620239227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:11 ha-076992 kubelet[1304]: E0919 19:38:11.622803    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774691622278644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:11 ha-076992 kubelet[1304]: E0919 19:38:11.622872    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774691622278644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:21 ha-076992 kubelet[1304]: E0919 19:38:21.627440    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774701625315375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:21 ha-076992 kubelet[1304]: E0919 19:38:21.627518    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774701625315375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:31 ha-076992 kubelet[1304]: E0919 19:38:31.409721    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 19 19:38:31 ha-076992 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 19:38:31 ha-076992 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 19:38:31 ha-076992 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 19:38:31 ha-076992 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 19:38:31 ha-076992 kubelet[1304]: E0919 19:38:31.632884    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774711632378187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:31 ha-076992 kubelet[1304]: E0919 19:38:31.632907    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774711632378187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:41 ha-076992 kubelet[1304]: E0919 19:38:41.634890    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774721634061391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:41 ha-076992 kubelet[1304]: E0919 19:38:41.634936    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774721634061391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:51 ha-076992 kubelet[1304]: E0919 19:38:51.638219    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774731637695808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:38:51 ha-076992 kubelet[1304]: E0919 19:38:51.638592    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774731637695808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:39:01 ha-076992 kubelet[1304]: E0919 19:39:01.644324    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774741640440505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:39:01 ha-076992 kubelet[1304]: E0919 19:39:01.645163    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774741640440505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 19:39:09.784850   37223 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19664-7917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076992 -n ha-076992
helpers_test.go:261: (dbg) Run:  kubectl --context ha-076992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (429.93s)
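
Note on the "failed to read file .../lastStart.txt: bufio.Scanner: token too long" line in the stderr above: that is the standard Go bufio.Scanner failure when a single line exceeds the scanner's default 64 KiB token limit (bufio.MaxScanTokenSize). A minimal sketch of the pattern follows; the file name is hypothetical and this is not minikube's actual code, just an illustration of how the error arises and how an enlarged buffer avoids it.

// Sketch only: read a log file line-by-line, raising the Scanner token limit.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Allow lines up to 1 MiB instead of the 64 KiB default; without this,
	// a very long line makes sc.Err() return "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
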

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076992 stop -v=7 --alsologtostderr: exit status 82 (2m0.47314885s)

                                                
                                                
-- stdout --
	* Stopping node "ha-076992-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:39:29.344833   37653 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:39:29.344931   37653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:39:29.344936   37653 out.go:358] Setting ErrFile to fd 2...
	I0919 19:39:29.344941   37653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:39:29.345181   37653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:39:29.345425   37653 out.go:352] Setting JSON to false
	I0919 19:39:29.345498   37653 mustload.go:65] Loading cluster: ha-076992
	I0919 19:39:29.345872   37653 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:39:29.345961   37653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:39:29.346133   37653 mustload.go:65] Loading cluster: ha-076992
	I0919 19:39:29.346257   37653 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:39:29.346279   37653 stop.go:39] StopHost: ha-076992-m04
	I0919 19:39:29.346653   37653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:39:29.346694   37653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:39:29.361535   37653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42675
	I0919 19:39:29.361960   37653 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:39:29.362555   37653 main.go:141] libmachine: Using API Version  1
	I0919 19:39:29.362576   37653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:39:29.362978   37653 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:39:29.365276   37653 out.go:177] * Stopping node "ha-076992-m04"  ...
	I0919 19:39:29.366533   37653 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0919 19:39:29.366558   37653 main.go:141] libmachine: (ha-076992-m04) Calling .DriverName
	I0919 19:39:29.366776   37653 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0919 19:39:29.366815   37653 main.go:141] libmachine: (ha-076992-m04) Calling .GetSSHHostname
	I0919 19:39:29.369841   37653 main.go:141] libmachine: (ha-076992-m04) DBG | domain ha-076992-m04 has defined MAC address 52:54:00:e3:13:dd in network mk-ha-076992
	I0919 19:39:29.370226   37653 main.go:141] libmachine: (ha-076992-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:13:dd", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:38:58 +0000 UTC Type:0 Mac:52:54:00:e3:13:dd Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-076992-m04 Clientid:01:52:54:00:e3:13:dd}
	I0919 19:39:29.370259   37653 main.go:141] libmachine: (ha-076992-m04) DBG | domain ha-076992-m04 has defined IP address 192.168.39.157 and MAC address 52:54:00:e3:13:dd in network mk-ha-076992
	I0919 19:39:29.370374   37653 main.go:141] libmachine: (ha-076992-m04) Calling .GetSSHPort
	I0919 19:39:29.370540   37653 main.go:141] libmachine: (ha-076992-m04) Calling .GetSSHKeyPath
	I0919 19:39:29.370674   37653 main.go:141] libmachine: (ha-076992-m04) Calling .GetSSHUsername
	I0919 19:39:29.370859   37653 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992-m04/id_rsa Username:docker}
	I0919 19:39:29.457157   37653 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0919 19:39:29.511043   37653 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0919 19:39:29.564159   37653 main.go:141] libmachine: Stopping "ha-076992-m04"...
	I0919 19:39:29.564221   37653 main.go:141] libmachine: (ha-076992-m04) Calling .GetState
	I0919 19:39:29.565693   37653 main.go:141] libmachine: (ha-076992-m04) Calling .Stop
	I0919 19:39:29.568801   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 0/120
	I0919 19:39:30.570214   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 1/120
	I0919 19:39:31.571699   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 2/120
	I0919 19:39:32.573338   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 3/120
	I0919 19:39:33.575786   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 4/120
	I0919 19:39:34.577439   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 5/120
	I0919 19:39:35.578707   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 6/120
	I0919 19:39:36.580069   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 7/120
	I0919 19:39:37.581820   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 8/120
	I0919 19:39:38.583650   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 9/120
	I0919 19:39:39.584963   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 10/120
	I0919 19:39:40.586520   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 11/120
	I0919 19:39:41.587993   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 12/120
	I0919 19:39:42.589935   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 13/120
	I0919 19:39:43.591454   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 14/120
	I0919 19:39:44.593280   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 15/120
	I0919 19:39:45.595652   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 16/120
	I0919 19:39:46.597155   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 17/120
	I0919 19:39:47.598451   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 18/120
	I0919 19:39:48.599985   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 19/120
	I0919 19:39:49.601291   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 20/120
	I0919 19:39:50.602697   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 21/120
	I0919 19:39:51.603946   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 22/120
	I0919 19:39:52.605190   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 23/120
	I0919 19:39:53.606616   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 24/120
	I0919 19:39:54.608150   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 25/120
	I0919 19:39:55.610450   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 26/120
	I0919 19:39:56.612031   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 27/120
	I0919 19:39:57.613542   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 28/120
	I0919 19:39:58.615609   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 29/120
	I0919 19:39:59.617283   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 30/120
	I0919 19:40:00.619665   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 31/120
	I0919 19:40:01.621721   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 32/120
	I0919 19:40:02.623566   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 33/120
	I0919 19:40:03.624942   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 34/120
	I0919 19:40:04.627251   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 35/120
	I0919 19:40:05.628568   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 36/120
	I0919 19:40:06.629842   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 37/120
	I0919 19:40:07.631512   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 38/120
	I0919 19:40:08.632730   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 39/120
	I0919 19:40:09.633908   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 40/120
	I0919 19:40:10.635414   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 41/120
	I0919 19:40:11.637502   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 42/120
	I0919 19:40:12.639720   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 43/120
	I0919 19:40:13.641827   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 44/120
	I0919 19:40:14.643442   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 45/120
	I0919 19:40:15.644839   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 46/120
	I0919 19:40:16.646262   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 47/120
	I0919 19:40:17.647745   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 48/120
	I0919 19:40:18.649242   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 49/120
	I0919 19:40:19.650865   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 50/120
	I0919 19:40:20.652670   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 51/120
	I0919 19:40:21.654529   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 52/120
	I0919 19:40:22.655966   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 53/120
	I0919 19:40:23.657387   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 54/120
	I0919 19:40:24.659596   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 55/120
	I0919 19:40:25.660908   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 56/120
	I0919 19:40:26.663108   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 57/120
	I0919 19:40:27.664599   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 58/120
	I0919 19:40:28.665945   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 59/120
	I0919 19:40:29.668062   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 60/120
	I0919 19:40:30.669491   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 61/120
	I0919 19:40:31.671643   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 62/120
	I0919 19:40:32.673669   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 63/120
	I0919 19:40:33.675585   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 64/120
	I0919 19:40:34.677542   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 65/120
	I0919 19:40:35.678765   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 66/120
	I0919 19:40:36.680158   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 67/120
	I0919 19:40:37.681405   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 68/120
	I0919 19:40:38.683656   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 69/120
	I0919 19:40:39.685665   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 70/120
	I0919 19:40:40.687638   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 71/120
	I0919 19:40:41.689238   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 72/120
	I0919 19:40:42.691807   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 73/120
	I0919 19:40:43.693209   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 74/120
	I0919 19:40:44.695390   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 75/120
	I0919 19:40:45.696777   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 76/120
	I0919 19:40:46.698147   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 77/120
	I0919 19:40:47.699592   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 78/120
	I0919 19:40:48.700770   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 79/120
	I0919 19:40:49.702043   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 80/120
	I0919 19:40:50.703434   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 81/120
	I0919 19:40:51.704749   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 82/120
	I0919 19:40:52.706031   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 83/120
	I0919 19:40:53.707305   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 84/120
	I0919 19:40:54.709446   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 85/120
	I0919 19:40:55.711065   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 86/120
	I0919 19:40:56.712334   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 87/120
	I0919 19:40:57.714371   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 88/120
	I0919 19:40:58.715695   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 89/120
	I0919 19:40:59.717772   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 90/120
	I0919 19:41:00.719039   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 91/120
	I0919 19:41:01.720543   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 92/120
	I0919 19:41:02.722045   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 93/120
	I0919 19:41:03.723755   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 94/120
	I0919 19:41:04.725558   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 95/120
	I0919 19:41:05.726778   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 96/120
	I0919 19:41:06.728136   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 97/120
	I0919 19:41:07.729509   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 98/120
	I0919 19:41:08.730846   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 99/120
	I0919 19:41:09.732971   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 100/120
	I0919 19:41:10.734976   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 101/120
	I0919 19:41:11.736701   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 102/120
	I0919 19:41:12.737929   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 103/120
	I0919 19:41:13.739582   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 104/120
	I0919 19:41:14.741919   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 105/120
	I0919 19:41:15.743418   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 106/120
	I0919 19:41:16.744776   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 107/120
	I0919 19:41:17.746291   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 108/120
	I0919 19:41:18.747643   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 109/120
	I0919 19:41:19.749968   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 110/120
	I0919 19:41:20.751592   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 111/120
	I0919 19:41:21.753139   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 112/120
	I0919 19:41:22.754394   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 113/120
	I0919 19:41:23.755924   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 114/120
	I0919 19:41:24.758034   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 115/120
	I0919 19:41:25.760195   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 116/120
	I0919 19:41:26.761543   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 117/120
	I0919 19:41:27.763901   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 118/120
	I0919 19:41:28.765328   37653 main.go:141] libmachine: (ha-076992-m04) Waiting for machine to stop 119/120
	I0919 19:41:29.766462   37653 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0919 19:41:29.766541   37653 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 19:41:29.768692   37653 out.go:201] 
	W0919 19:41:29.770152   37653 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0919 19:41:29.770168   37653 out.go:270] * 
	* 
	W0919 19:41:29.772280   37653 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 19:41:29.773917   37653 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-076992 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr: (18.929373688s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr": 
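
For context on the exit status 82 above: the stderr shows the driver polling the guest once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and then giving up with GUEST_STOP_TIMEOUT because the VM never left the Running state. The sketch below shows that wait-and-give-up pattern in miniature; it is illustrative only, and the interface and type names are hypothetical rather than minikube's or libmachine's actual API.

// Sketch only: stop a VM, then poll its state until it stops or we time out.
package main

import (
	"errors"
	"fmt"
	"time"
)

// stopper is a stand-in for a machine driver; names are hypothetical.
type stopper interface {
	Stop() error
	State() (string, error)
}

// stuckVM simulates a guest that accepts the stop request but never shuts down.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

// stopWithTimeout issues Stop, then polls the state once per interval for up
// to maxTries attempts, mirroring the 120-attempt wait seen in the log.
func stopWithTimeout(m stopper, maxTries int, interval time.Duration) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxTries; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxTries)
		st, err := m.State()
		if err != nil {
			return err
		}
		if st != "Running" {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Short interval and few tries so the sketch finishes quickly.
	if err := stopWithTimeout(stuckVM{}, 5, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}
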
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-076992 -n ha-076992
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-076992 logs -n 25: (1.69063422s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m04 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp testdata/cp-test.txt                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992:/home/docker/cp-test_ha-076992-m04_ha-076992.txt                       |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992 sudo cat                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992.txt                                 |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m02:/home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03:/home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m03 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-076992 node stop m02 -v=7                                                     | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-076992 node start m02 -v=7                                                    | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-076992 -v=7                                                           | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-076992 -v=7                                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-076992 --wait=true -v=7                                                    | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:34 UTC | 19 Sep 24 19:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-076992                                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:39 UTC |                     |
	| node    | ha-076992 node delete m03 -v=7                                                   | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:39 UTC | 19 Sep 24 19:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-076992 stop -v=7                                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 19:34:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 19:34:04.045011   35612 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:34:04.045279   35612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:34:04.045288   35612 out.go:358] Setting ErrFile to fd 2...
	I0919 19:34:04.045291   35612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:34:04.045459   35612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:34:04.045994   35612 out.go:352] Setting JSON to false
	I0919 19:34:04.046891   35612 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4588,"bootTime":1726769856,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:34:04.046988   35612 start.go:139] virtualization: kvm guest
	I0919 19:34:04.049154   35612 out.go:177] * [ha-076992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:34:04.050341   35612 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:34:04.050350   35612 notify.go:220] Checking for updates...
	I0919 19:34:04.052730   35612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:34:04.053959   35612 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:34:04.055026   35612 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:34:04.056037   35612 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:34:04.057120   35612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:34:04.058750   35612 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:34:04.058834   35612 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:34:04.059303   35612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:34:04.059343   35612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:34:04.074403   35612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39207
	I0919 19:34:04.074797   35612 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:34:04.075316   35612 main.go:141] libmachine: Using API Version  1
	I0919 19:34:04.075340   35612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:34:04.075751   35612 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:34:04.075940   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:34:04.110065   35612 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 19:34:04.111249   35612 start.go:297] selected driver: kvm2
	I0919 19:34:04.111262   35612 start.go:901] validating driver "kvm2" against &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:34:04.111400   35612 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:34:04.111717   35612 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:34:04.111804   35612 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 19:34:04.127202   35612 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 19:34:04.128165   35612 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:34:04.128211   35612 cni.go:84] Creating CNI manager for ""
	I0919 19:34:04.128261   35612 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 19:34:04.128337   35612 start.go:340] cluster config:
	{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:34:04.128472   35612 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:34:04.130331   35612 out.go:177] * Starting "ha-076992" primary control-plane node in "ha-076992" cluster
	I0919 19:34:04.131735   35612 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:34:04.131785   35612 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 19:34:04.131800   35612 cache.go:56] Caching tarball of preloaded images
	I0919 19:34:04.131918   35612 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:34:04.131931   35612 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:34:04.132044   35612 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:34:04.132253   35612 start.go:360] acquireMachinesLock for ha-076992: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:34:04.132298   35612 start.go:364] duration metric: took 27.107µs to acquireMachinesLock for "ha-076992"
	I0919 19:34:04.132314   35612 start.go:96] Skipping create...Using existing machine configuration
	I0919 19:34:04.132322   35612 fix.go:54] fixHost starting: 
	I0919 19:34:04.132571   35612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:34:04.132600   35612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:34:04.147138   35612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39301
	I0919 19:34:04.147542   35612 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:34:04.148023   35612 main.go:141] libmachine: Using API Version  1
	I0919 19:34:04.148049   35612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:34:04.148367   35612 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:34:04.148598   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:34:04.148771   35612 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:34:04.150428   35612 fix.go:112] recreateIfNeeded on ha-076992: state=Running err=<nil>
	W0919 19:34:04.150449   35612 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 19:34:04.152612   35612 out.go:177] * Updating the running kvm2 "ha-076992" VM ...
	I0919 19:34:04.153913   35612 machine.go:93] provisionDockerMachine start ...
	I0919 19:34:04.153932   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:34:04.154146   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.157199   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.157687   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.157706   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.157843   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:34:04.158020   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.158147   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.158315   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:34:04.158486   35612 main.go:141] libmachine: Using SSH client type: native
	I0919 19:34:04.158697   35612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:34:04.158708   35612 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 19:34:04.262495   35612 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:34:04.262535   35612 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:34:04.262777   35612 buildroot.go:166] provisioning hostname "ha-076992"
	I0919 19:34:04.262805   35612 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:34:04.262983   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.265489   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.265882   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.265909   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.266078   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:34:04.266250   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.266390   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.266505   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:34:04.266624   35612 main.go:141] libmachine: Using SSH client type: native
	I0919 19:34:04.266852   35612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:34:04.266869   35612 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992 && echo "ha-076992" | sudo tee /etc/hostname
	I0919 19:34:04.385951   35612 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:34:04.385978   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.388928   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.389351   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.389379   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.389547   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:34:04.389710   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.389885   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.390034   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:34:04.390172   35612 main.go:141] libmachine: Using SSH client type: native
	I0919 19:34:04.390330   35612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:34:04.390345   35612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:34:04.490288   35612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:34:04.490326   35612 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:34:04.490378   35612 buildroot.go:174] setting up certificates
	I0919 19:34:04.490388   35612 provision.go:84] configureAuth start
	I0919 19:34:04.490400   35612 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:34:04.490643   35612 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:34:04.493445   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.493787   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.493815   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.493985   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.495866   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.496276   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.496301   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.496451   35612 provision.go:143] copyHostCerts
	I0919 19:34:04.496482   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:34:04.496521   35612 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:34:04.496529   35612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:34:04.496595   35612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:34:04.496680   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:34:04.496696   35612 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:34:04.496703   35612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:34:04.496727   35612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:34:04.496803   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:34:04.496823   35612 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:34:04.496828   35612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:34:04.496850   35612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:34:04.496914   35612 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992 san=[127.0.0.1 192.168.39.173 ha-076992 localhost minikube]
	I0919 19:34:04.695896   35612 provision.go:177] copyRemoteCerts
	I0919 19:34:04.695965   35612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:34:04.695993   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.698657   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.699041   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.699069   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.699252   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:34:04.699445   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.699607   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:34:04.699776   35612 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:34:04.781558   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:34:04.781640   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:34:04.809342   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:34:04.809417   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 19:34:04.836693   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:34:04.836777   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 19:34:04.863522   35612 provision.go:87] duration metric: took 373.112415ms to configureAuth
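
For reference, a quick way to confirm that the SANs listed above actually ended up in the provisioned server certificate is to inspect it with openssl from inside the VM. A minimal sketch, assuming shell access to the node (for example via minikube ssh -p ha-076992); illustrative only and not part of the recorded run:

	# hypothetical check: print the Subject Alternative Names of the server cert copied above
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
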
	I0919 19:34:04.863562   35612 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:34:04.863917   35612 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:34:04.864070   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:34:04.867216   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.867651   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:34:04.867677   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:34:04.867836   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:34:04.868019   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.868167   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:34:04.868299   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:34:04.868459   35612 main.go:141] libmachine: Using SSH client type: native
	I0919 19:34:04.868642   35612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:34:04.868659   35612 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:35:35.736663   35612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:35:35.736699   35612 machine.go:96] duration metric: took 1m31.582773469s to provisionDockerMachine
	I0919 19:35:35.736712   35612 start.go:293] postStartSetup for "ha-076992" (driver="kvm2")
	I0919 19:35:35.736726   35612 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:35:35.736745   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:35.737105   35612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:35:35.737142   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:35:35.740171   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.740643   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:35.740668   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.740830   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:35:35.741076   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:35.741263   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:35:35.741412   35612 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:35:35.824833   35612 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:35:35.829521   35612 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:35:35.829553   35612 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:35:35.829635   35612 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:35:35.829737   35612 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:35:35.829749   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:35:35.829862   35612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:35:35.839430   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:35:35.864494   35612 start.go:296] duration metric: took 127.769368ms for postStartSetup
	I0919 19:35:35.864537   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:35.864806   35612 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0919 19:35:35.864831   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:35:35.867614   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.868031   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:35.868051   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.868196   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:35:35.868344   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:35.868475   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:35:35.868623   35612 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	W0919 19:35:35.947990   35612 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0919 19:35:35.948018   35612 fix.go:56] duration metric: took 1m31.815695978s for fixHost
	I0919 19:35:35.948040   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:35:35.951001   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.951351   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:35.951379   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:35.951508   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:35:35.951666   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:35.951818   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:35.951993   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:35:35.952176   35612 main.go:141] libmachine: Using SSH client type: native
	I0919 19:35:35.952367   35612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:35:35.952380   35612 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:35:36.054112   35612 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726774536.022437873
	
	I0919 19:35:36.054137   35612 fix.go:216] guest clock: 1726774536.022437873
	I0919 19:35:36.054154   35612 fix.go:229] Guest: 2024-09-19 19:35:36.022437873 +0000 UTC Remote: 2024-09-19 19:35:35.9480247 +0000 UTC m=+91.938130215 (delta=74.413173ms)
	I0919 19:35:36.054205   35612 fix.go:200] guest clock delta is within tolerance: 74.413173ms
	I0919 19:35:36.054212   35612 start.go:83] releasing machines lock for "ha-076992", held for 1m31.921904362s
	I0919 19:35:36.054240   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:36.054496   35612 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:35:36.056877   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.057258   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:36.057321   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.057448   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:36.058036   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:36.058215   35612 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:35:36.058311   35612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:35:36.058357   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:35:36.058401   35612 ssh_runner.go:195] Run: cat /version.json
	I0919 19:35:36.058425   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:35:36.061276   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.061548   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.061780   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:36.061801   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.061918   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:35:36.061959   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:36.061983   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:36.062079   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:36.062131   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:35:36.062430   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:35:36.062432   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:35:36.062616   35612 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:35:36.062611   35612 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:35:36.062765   35612 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:35:36.159444   35612 ssh_runner.go:195] Run: systemctl --version
	I0919 19:35:36.165753   35612 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:35:36.324216   35612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:35:36.333136   35612 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:35:36.333202   35612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:35:36.342917   35612 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 19:35:36.342941   35612 start.go:495] detecting cgroup driver to use...
	I0919 19:35:36.343015   35612 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:35:36.360057   35612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:35:36.374750   35612 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:35:36.374816   35612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:35:36.389007   35612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:35:36.403039   35612 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:35:36.554664   35612 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:35:36.700712   35612 docker.go:233] disabling docker service ...
	I0919 19:35:36.700789   35612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:35:36.716809   35612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:35:36.730663   35612 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:35:36.872963   35612 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:35:37.017479   35612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:35:37.032027   35612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:35:37.049710   35612 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:35:37.049764   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.060158   35612 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:35:37.060252   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.070881   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.081722   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.092191   35612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:35:37.102727   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.113591   35612 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.124382   35612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:35:37.134409   35612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:35:37.143345   35612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:35:37.152486   35612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:35:37.292224   35612 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:35:41.818659   35612 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.52639635s)
	I0919 19:35:41.818693   35612 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:35:41.818747   35612 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:35:41.824935   35612 start.go:563] Will wait 60s for crictl version
	I0919 19:35:41.824995   35612 ssh_runner.go:195] Run: which crictl
	I0919 19:35:41.828800   35612 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:35:41.868007   35612 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:35:41.868091   35612 ssh_runner.go:195] Run: crio --version
	I0919 19:35:41.897790   35612 ssh_runner.go:195] Run: crio --version
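
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, default sysctls) before CRI-O is restarted. A minimal sketch of how those values could be read back on the node, assuming shell access; illustrative only and not part of the recorded run:

	# hypothetical check: the keys rewritten by the sed commands earlier in this log
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	# effective view as CRI-O itself resolves it (minikube also runs "crio config" below)
	sudo crio config | grep -E 'pause_image|cgroup_manager'
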
	I0919 19:35:41.928421   35612 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:35:41.930102   35612 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:35:41.932877   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:41.933423   35612 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:35:41.933458   35612 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:35:41.933568   35612 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:35:41.938448   35612 kubeadm.go:883] updating cluster {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 19:35:41.938660   35612 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:35:41.938725   35612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:35:41.983134   35612 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:35:41.983159   35612 crio.go:433] Images already preloaded, skipping extraction
	I0919 19:35:41.983213   35612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:35:42.016823   35612 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:35:42.016845   35612 cache_images.go:84] Images are preloaded, skipping loading
	I0919 19:35:42.016853   35612 kubeadm.go:934] updating node { 192.168.39.173 8443 v1.31.1 crio true true} ...
	I0919 19:35:42.016950   35612 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
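
The kubelet unit override shown above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines further down). A minimal sketch of how the rendered unit could be inspected on the node, assuming shell access; illustrative only and not part of the recorded run:

	# hypothetical check: show the kubelet service together with all drop-ins as systemd sees them
	systemctl cat kubelet
	# or read the drop-in that minikube copies over directly
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
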
	I0919 19:35:42.017008   35612 ssh_runner.go:195] Run: crio config
	I0919 19:35:42.069951   35612 cni.go:84] Creating CNI manager for ""
	I0919 19:35:42.069971   35612 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 19:35:42.069980   35612 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 19:35:42.070000   35612 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076992 NodeName:ha-076992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 19:35:42.070123   35612 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
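
The kubeadm configuration rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp further down). As a sanity check it could be fed back to the bundled kubeadm binary; a minimal sketch, assuming the "kubeadm config validate" subcommand available in recent releases (treat the exact invocation as an assumption), illustrative only and not part of the recorded run:

	# hypothetical check: have kubeadm parse and validate the staged config
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
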
	
	I0919 19:35:42.070140   35612 kube-vip.go:115] generating kube-vip config ...
	I0919 19:35:42.070180   35612 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:35:42.082826   35612 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:35:42.082950   35612 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
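
The kube-vip manifest above enables control-plane leader election (vip_leaderelection) on the Lease named plndr-cp-lock, and the virtual IP 192.168.39.254 is announced by whichever node holds that lease. A minimal sketch of how the current holder could be checked once the cluster is up, illustrative only and not part of the recorded run:

	# hypothetical check: holderIdentity shows which control-plane node currently owns the VIP
	kubectl -n kube-system get lease plndr-cp-lock -o yaml
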
	I0919 19:35:42.083005   35612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:35:42.093786   35612 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 19:35:42.093842   35612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 19:35:42.103536   35612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0919 19:35:42.120038   35612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:35:42.136696   35612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0919 19:35:42.152987   35612 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:35:42.170154   35612 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:35:42.174784   35612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:35:42.335803   35612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:35:42.350997   35612 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.173
	I0919 19:35:42.351024   35612 certs.go:194] generating shared ca certs ...
	I0919 19:35:42.351040   35612 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:35:42.351237   35612 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:35:42.351293   35612 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:35:42.351309   35612 certs.go:256] generating profile certs ...
	I0919 19:35:42.351419   35612 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:35:42.351454   35612 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.580518db
	I0919 19:35:42.351487   35612 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.580518db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.232 192.168.39.66 192.168.39.254]
	I0919 19:35:42.710621   35612 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.580518db ...
	I0919 19:35:42.710653   35612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.580518db: {Name:mka21968dcff4ec4de345cb34b1a85027031721f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:35:42.710841   35612 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.580518db ...
	I0919 19:35:42.710853   35612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.580518db: {Name:mk6e7e419864b86fa4a72d9703cfc517cf6d9d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:35:42.710919   35612 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.580518db -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:35:42.711052   35612 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.580518db -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:35:42.711183   35612 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:35:42.711198   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:35:42.711211   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:35:42.711224   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:35:42.711237   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:35:42.711250   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:35:42.711262   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:35:42.711274   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:35:42.711285   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:35:42.711338   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:35:42.711366   35612 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:35:42.711376   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:35:42.711398   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:35:42.711421   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:35:42.711441   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:35:42.711477   35612 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:35:42.711505   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:35:42.711518   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:35:42.711530   35612 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:35:42.712064   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:35:42.738049   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:35:42.761697   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:35:42.786470   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:35:42.810647   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 19:35:42.834879   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 19:35:42.860209   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:35:42.885501   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:35:42.909808   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:35:42.933446   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:35:42.957769   35612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:35:42.981374   35612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 19:35:42.997806   35612 ssh_runner.go:195] Run: openssl version
	I0919 19:35:43.003967   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:35:43.014926   35612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:35:43.019762   35612 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:35:43.019820   35612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:35:43.025576   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:35:43.035164   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:35:43.046092   35612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:35:43.050733   35612 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:35:43.050777   35612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:35:43.056382   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:35:43.066161   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:35:43.077503   35612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:35:43.082423   35612 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:35:43.082472   35612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:35:43.088485   35612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
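The command pairs above (openssl x509 -hash to compute the subject hash, then ln -fs into /etc/ssl/certs/<hash>.0) follow the standard OpenSSL trust-store layout: at verification time OpenSSL looks a CA up by that hash-named file. A small sketch of the same pattern for the minikubeCA certificate shown in the log; the HASH variable and the sudo invocation are illustrative additions:

	# Illustrative sketch: install a CA into the OpenSSL hash-named trust store.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # e.g. b5213941.0, matching the log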
	I0919 19:35:43.098416   35612 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:35:43.103155   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 19:35:43.109003   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 19:35:43.114566   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 19:35:43.120192   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 19:35:43.125770   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 19:35:43.131316   35612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
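The -checkend 86400 runs above are expiry probes: openssl x509 -checkend N exits 0 only when the certificate remains valid for at least another N seconds, so a non-zero exit here flags a cert that is about to expire. A one-line sketch against the apiserver cert path copied earlier in the log; the echo messages are illustrative:

	# Exit status 0 means the cert is still valid 24h from now.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "apiserver.crt valid for >= 24h" \
	  || echo "apiserver.crt expires within 24h (or could not be read)"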
	I0919 19:35:43.137014   35612 kubeadm.go:392] StartCluster: {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:35:43.137188   35612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 19:35:43.137243   35612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 19:35:43.178107   35612 cri.go:89] found id: "a2fc732004b04ea4a6ca212d7bc10b2d00a4a4d143d966ec9f87cc517e9d10d0"
	I0919 19:35:43.178129   35612 cri.go:89] found id: "1cf6eed5d6c49a78a045d5c52b9176fb4958fda7be711c94debacd6b78c95218"
	I0919 19:35:43.178133   35612 cri.go:89] found id: "8f9eddf8eefc0e3e2393d684dfb9c3349ddcceaafb9c51ed54961ea5da8caf71"
	I0919 19:35:43.178136   35612 cri.go:89] found id: "17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3"
	I0919 19:35:43.178139   35612 cri.go:89] found id: "cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0"
	I0919 19:35:43.178142   35612 cri.go:89] found id: "6eb7d5748986222523d03124d3b8e8c97cdd0739b7e1fde36fe7b29c8208f856"
	I0919 19:35:43.178145   35612 cri.go:89] found id: "d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4"
	I0919 19:35:43.178147   35612 cri.go:89] found id: "9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2"
	I0919 19:35:43.178150   35612 cri.go:89] found id: "3132b4bb29e16598dcf9e2080a666c00abe7e3c5eef744d468c6f5681fa2c389"
	I0919 19:35:43.178155   35612 cri.go:89] found id: "5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1"
	I0919 19:35:43.178157   35612 cri.go:89] found id: "f7da5064b19f5ac8d1743758ed65a853a3e2d5fe6fa3638ee3be69d83b4e2501"
	I0919 19:35:43.178160   35612 cri.go:89] found id: "3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018"
	I0919 19:35:43.178162   35612 cri.go:89] found id: "5b605d500b3ee7e774bf27efde8792514a803dca04b3c4678bb85ce95badda4b"
	I0919 19:35:43.178166   35612 cri.go:89] found id: ""
	I0919 19:35:43.178206   35612 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.356047379Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726774630404511008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726774593449933479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726774593413510889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63df2e8772528c1c649ba71943a50c5a9584fc0c35d1e10002a0188afe543524,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726774588400631013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774582732602783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726774564362621411,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726774549595312188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726774549590216977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0
f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726774549543081171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db64
55320b4234dac25c23d10d7757629b3f372,PodSandboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774549399831645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726774549303480080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726774549256337526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726774549115414423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774544774674022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774089735408659,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950241312081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950179536093,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726773937822274967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726773937599657860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726773925364635092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726773925242908998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ca6a2de-4655-4d76-9de0-39e67144dd06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.387342871Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cb547b2-bffa-4afe-815c-185d6a64be78 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.387847567Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-8wfb7,Uid:e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774582563941602,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:28:06.143892361Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-076992,Uid:22afd76430fe0849caa93fde9d59c02f,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726774564259159318,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{kubernetes.io/config.hash: 22afd76430fe0849caa93fde9d59c02f,kubernetes.io/config.seen: 2024-09-19T19:35:42.139429547Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&PodSandboxMetadata{Name:etcd-ha-076992,Uid:79b7783d18d62d18697a4d1aa0ff5755,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548885610793,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.173:2
379,kubernetes.io/config.hash: 79b7783d18d62d18697a4d1aa0ff5755,kubernetes.io/config.seen: 2024-09-19T19:25:31.378577774Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&PodSandboxMetadata{Name:kube-proxy-4d8dc,Uid:4d522b18-9ae7-46a9-a6c7-e1560a1822de,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548872048051,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:35.640844315Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-076992,Uid:b693200c7b44
d836573bbd57560a83e1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548871324426,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b693200c7b44d836573bbd57560a83e1,kubernetes.io/config.seen: 2024-09-19T19:25:31.378580001Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bst8x,Uid:165f4eae-fc28-4b50-b35f-f61f95d9872a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548840700855,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-f
c28-4b50-b35f-f61f95d9872a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:49.628297100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-076992,Uid:c1c4b85bfdfb554afca940fe6375dba9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548828126019,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c1c4b85bfdfb554afca940fe6375dba9,kubernetes.io/config.seen: 2024-09-19T19:25:31.378571935Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&P
odSandboxMetadata{Name:kindnet-j846w,Uid:cdccd08d-8a5d-4495-8ad3-5591de87862f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548823722785,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:35.645448663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-076992,Uid:3d5aa3049515e8c07c16189cb9b261d4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548821558401,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.173:8443,kubernetes.io/config.hash: 3d5aa3049515e8c07c16189cb9b261d4,kubernetes.io/config.seen: 2024-09-19T19:25:31.378578928Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7964879c-5097-490e-b1ba-dd41091ca283,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548815389556,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\
":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-19T19:25:49.629787866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbds4,Uid:89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774544634969900,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:49.620635006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6cb547b2-bffa-4afe-815c-185d6a64be78 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.388758122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e8598c6-cf6e-41fc-8745-76683cb0731b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.388831527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e8598c6-cf6e-41fc-8745-76683cb0731b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.389113691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726774630404511008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726774593449933479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726774593413510889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774582732602783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726774564362621411,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726774549595312188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726774549590216977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:cfb4ace0f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726774549543081171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b99891
5ace13c79db6455320b4234dac25c23d10d7757629b3f372,PodSandboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774549399831645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726774549303480080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774544774674022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort
\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e8598c6-cf6e-41fc-8745-76683cb0731b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.403184520Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34e637b5-a075-46c0-9f75-8a64c5736b16 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.403275184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34e637b5-a075-46c0-9f75-8a64c5736b16 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.404675882Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d4eb405-f131-47c9-aaf0-1693e7100467 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.408386584Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774909405164933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d4eb405-f131-47c9-aaf0-1693e7100467 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.412373760Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=1b797ab5-bbe0-4d32-bf3e-dd30fe75080c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.412676710Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-8wfb7,Uid:e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774582563941602,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:28:06.143892361Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-076992,Uid:22afd76430fe0849caa93fde9d59c02f,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726774564259159318,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{kubernetes.io/config.hash: 22afd76430fe0849caa93fde9d59c02f,kubernetes.io/config.seen: 2024-09-19T19:35:42.139429547Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&PodSandboxMetadata{Name:etcd-ha-076992,Uid:79b7783d18d62d18697a4d1aa0ff5755,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548885610793,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.173:2
379,kubernetes.io/config.hash: 79b7783d18d62d18697a4d1aa0ff5755,kubernetes.io/config.seen: 2024-09-19T19:25:31.378577774Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&PodSandboxMetadata{Name:kube-proxy-4d8dc,Uid:4d522b18-9ae7-46a9-a6c7-e1560a1822de,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548872048051,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:35.640844315Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-076992,Uid:b693200c7b44
d836573bbd57560a83e1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548871324426,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b693200c7b44d836573bbd57560a83e1,kubernetes.io/config.seen: 2024-09-19T19:25:31.378580001Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bst8x,Uid:165f4eae-fc28-4b50-b35f-f61f95d9872a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548840700855,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-f
c28-4b50-b35f-f61f95d9872a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:49.628297100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-076992,Uid:c1c4b85bfdfb554afca940fe6375dba9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548828126019,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c1c4b85bfdfb554afca940fe6375dba9,kubernetes.io/config.seen: 2024-09-19T19:25:31.378571935Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&P
odSandboxMetadata{Name:kindnet-j846w,Uid:cdccd08d-8a5d-4495-8ad3-5591de87862f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548823722785,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:35.645448663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-076992,Uid:3d5aa3049515e8c07c16189cb9b261d4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548821558401,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.173:8443,kubernetes.io/config.hash: 3d5aa3049515e8c07c16189cb9b261d4,kubernetes.io/config.seen: 2024-09-19T19:25:31.378578928Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7964879c-5097-490e-b1ba-dd41091ca283,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774548815389556,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\
":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-19T19:25:49.629787866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbds4,Uid:89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726774544634969900,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:49.620635006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-8wfb7,Uid:e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726774086457559625,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:28:06.143892361Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bst8x,Uid:165f4eae-fc28-4b50-b35f-f61f95d9872a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726773949949220893,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:49.628297100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbds4,Uid:89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726773949939234068,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:49.620635006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&PodSandboxMetadata{Name:kube-proxy-4d8dc,Uid:4d522b18-9ae7-46a9-a6c7-e1560a1822de,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726773937464283199,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:35.640844315Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&PodSandboxMetadata{Name:kindnet-j846w,Uid:cdccd08d-8a5d-4495-8ad3-5591de87862f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726773937453270806,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-19T19:25:35.645448663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&PodSandboxMetadata{Name:etcd-ha-076992,Uid:79b7783d18d62d18697a4d1aa0ff5755,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726773925030413752,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.173:2379,kubernetes.io/config.hash: 79b7783d18d62d18697a4d1aa0ff5755,kubernetes.io/config.seen: 2024-09-19T19:25:24.540181273Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-076992,Uid:c1c4b85bfdfb554afca940fe6375dba9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726773925018151522,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c1c4b85b
fdfb554afca940fe6375dba9,kubernetes.io/config.seen: 2024-09-19T19:25:24.540176900Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1b797ab5-bbe0-4d32-bf3e-dd30fe75080c name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.413073772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04979eca-e43b-4df5-bfd6-efb7436a95be name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.413134582Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04979eca-e43b-4df5-bfd6-efb7436a95be name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.413473652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726774630404511008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726774593449933479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726774593413510889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63df2e8772528c1c649ba71943a50c5a9584fc0c35d1e10002a0188afe543524,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726774588400631013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774582732602783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726774564362621411,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726774549595312188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726774549590216977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0
f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726774549543081171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db64
55320b4234dac25c23d10d7757629b3f372,PodSandboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774549399831645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726774549303480080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726774549256337526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726774549115414423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774544774674022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774089735408659,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950241312081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950179536093,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726773937822274967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726773937599657860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726773925364635092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726773925242908998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04979eca-e43b-4df5-bfd6-efb7436a95be name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.414100741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88e5237b-6dd8-4134-ab8a-c96163937a16 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.414274948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88e5237b-6dd8-4134-ab8a-c96163937a16 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.415093495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726774630404511008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726774593449933479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726774593413510889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63df2e8772528c1c649ba71943a50c5a9584fc0c35d1e10002a0188afe543524,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726774588400631013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774582732602783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726774564362621411,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726774549595312188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726774549590216977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0
f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726774549543081171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db64
55320b4234dac25c23d10d7757629b3f372,PodSandboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774549399831645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726774549303480080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726774549256337526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726774549115414423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774544774674022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774089735408659,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950241312081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950179536093,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726773937822274967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726773937599657860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726773925364635092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726773925242908998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88e5237b-6dd8-4134-ab8a-c96163937a16 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.459903880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14c26bcb-f5fb-41f5-8c2c-6e019e84cd62 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.460054493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14c26bcb-f5fb-41f5-8c2c-6e019e84cd62 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.461398067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=540b37a8-6a07-48aa-a8fb-0026e02b66d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.461792760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774909461769454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=540b37a8-6a07-48aa-a8fb-0026e02b66d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.462549168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b501376-b455-4a4b-8d22-0fdd4b92227f name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.462619130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b501376-b455-4a4b-8d22-0fdd4b92227f name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:41:49 ha-076992 crio[3621]: time="2024-09-19 19:41:49.463065675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726774630404511008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726774593449933479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726774593413510889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63df2e8772528c1c649ba71943a50c5a9584fc0c35d1e10002a0188afe543524,PodSandboxId:79d0bd128843b266fd83b62687958a4118b4d5a37b20d5fab14074720479b2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726774588400631013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726774582732602783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726774564362621411,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726774549595312188,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726774549590216977,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0
f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726774549543081171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db64
55320b4234dac25c23d10d7757629b3f372,PodSandboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774549399831645,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726774549303480080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8,PodSandboxId:db14226d4ecb0114aa52172a24df0b3015bce60ed353e3d594acd5899d24c6a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726774549256337526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c,PodSandboxId:afc4e7e19236b321f8784bb630b9ae6ffc8572a0b718cd51ff65fa5740682716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726774549115414423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726774544774674022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52db63dad4c31fff8ade222ec8ab3811aff7ad5ca17bf86a766d7a912ac420b5,PodSandboxId:a8aaf854df6415f56ecbec066b03a8fcf177091b1519fcf7b4961ef7d6d6a840,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774089735408659,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3,PodSandboxId:8583d1eda759fc07bd3e790d17da88f826395822f125fc9d9ec456745d14b92d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950241312081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0,PodSandboxId:d65bb54e4c4267cdd6dd8cec95dc7ae836ed5bc5fe916fe1f2730561fb9ac33d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726773950179536093,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4,PodSandboxId:0273544afdfa64c62aa5105788e8d44b5358a587f64ea98add80aa1d7c9c8cc5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726773937822274967,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2,PodSandboxId:2a6c6ac66a43446da341df37be24aec61d70452ae4513a157be57229a14c935e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726773937599657860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1,PodSandboxId:09b02f34308ada09fb4262fc5b96178040e55f02c219b56719c4491530210783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726773925364635092,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018,PodSandboxId:fc5737a4c0f5c0ed679701f7e3b0926f7fa43277ca0709a70c51ab414e907812,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726773925242908998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b501376-b455-4a4b-8d22-0fdd4b92227f name=/runtime.v1.RuntimeService/ListContainers
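The CRI-O entries above are the runtime's own gRPC request/response trace (Version, ImageFsInfo, ListContainers) captured from its journal on the node. If the same trace needs to be followed live, a minimal sketch, assuming the standard minikube layout in which CRI-O runs as a systemd unit inside the profile VM, is:

    minikube ssh -p ha-076992 -- sudo journalctl -u crio -f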
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	004cf0a26efe0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   79d0bd128843b       storage-provisioner
	44e35509c3580       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Running             kube-controller-manager   2                   db14226d4ecb0       kube-controller-manager-ha-076992
	2e1f4501fff9a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Running             kube-apiserver            3                   afc4e7e19236b       kube-apiserver-ha-076992
	63df2e8772528       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   79d0bd128843b       storage-provisioner
	b1cfb43f1ef0c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   8772b407d7c25       busybox-7dff88458-8wfb7
	4526c50933cab       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   4f59647076dbb       kube-vip-ha-076992
	c412d5b70d043       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   8209dcfdd30b4       kube-proxy-4d8dc
	6e386f72e5d37       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   c194bf9cd1d21       kindnet-j846w
	cfb4ace0f3e59       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            1                   e9e69a1062cea       kube-scheduler-ha-076992
	b344ac64a2b99       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   1                   80031de6f8921       coredns-7c65d6cfc9-bst8x
	2810749ec6ddc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      1                   fb62ba74ee7f1       etcd-ha-076992
	262c164bf25b4       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Exited              kube-controller-manager   1                   db14226d4ecb0       kube-controller-manager-ha-076992
	d6a80e0201608       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Exited              kube-apiserver            2                   afc4e7e19236b       kube-apiserver-ha-076992
	611497be6a620       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   1                   257eb8bdca5fb       coredns-7c65d6cfc9-nbds4
	52db63dad4c31       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   a8aaf854df641       busybox-7dff88458-8wfb7
	17ef846dadbee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   8583d1eda759f       coredns-7c65d6cfc9-nbds4
	cbaa19f6b3857       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   d65bb54e4c426       coredns-7c65d6cfc9-bst8x
	d623b5f012d8a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      16 minutes ago      Exited              kindnet-cni               0                   0273544afdfa6       kindnet-j846w
	9d62ecb2cc70a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      16 minutes ago      Exited              kube-proxy                0                   2a6c6ac66a434       kube-proxy-4d8dc
	5745c8d186325       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   09b02f34308ad       kube-scheduler-ha-076992
	3beffc038ef33       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   fc5737a4c0f5c       etcd-ha-076992
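The table above follows the column layout of crictl's container listing. To reproduce it directly against this profile, a sketch (assuming the ha-076992 VM is still reachable and crictl is present on the node, as it is in the default minikube guest image) would be:

    minikube ssh -p ha-076992 -- sudo crictl ps -a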
	
	
	==> coredns [17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3] <==
	[INFO] 10.244.2.2:35304 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000093782s
	[INFO] 10.244.0.4:60710 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175542s
	[INFO] 10.244.0.4:56638 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002407779s
	[INFO] 10.244.1.2:60721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148724s
	[INFO] 10.244.2.2:40070 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138971s
	[INFO] 10.244.2.2:53394 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186542s
	[INFO] 10.244.2.2:54178 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000225634s
	[INFO] 10.244.2.2:53480 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001438271s
	[INFO] 10.244.2.2:48475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168626s
	[INFO] 10.244.2.2:49380 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160453s
	[INFO] 10.244.2.2:38326 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100289s
	[INFO] 10.244.1.2:47564 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107018s
	[INFO] 10.244.0.4:55521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119496s
	[INFO] 10.244.0.4:51830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118694s
	[INFO] 10.244.0.4:49301 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000181413s
	[INFO] 10.244.1.2:38961 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124955s
	[INFO] 10.244.1.2:37060 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092863s
	[INFO] 10.244.1.2:44024 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085892s
	[INFO] 10.244.2.2:35688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014156s
	[INFO] 10.244.2.2:33974 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170311s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1419069143]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 19:36:00.363) (total time: 10001ms):
	Trace[1419069143]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:36:10.364)
	Trace[1419069143]: [10.001580214s] [10.001580214s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50042->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50042->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b344ac64a2b998915ace13c79db6455320b4234dac25c23d10d7757629b3f372] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0] <==
	[INFO] 10.244.1.2:60797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218519s
	[INFO] 10.244.1.2:44944 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001794304s
	[INFO] 10.244.1.2:51111 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185225s
	[INFO] 10.244.1.2:46956 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160685s
	[INFO] 10.244.1.2:36318 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001321241s
	[INFO] 10.244.1.2:53158 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118134s
	[INFO] 10.244.1.2:45995 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102925s
	[INFO] 10.244.2.2:55599 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001757807s
	[INFO] 10.244.0.4:50520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118756s
	[INFO] 10.244.0.4:48294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189838s
	[INFO] 10.244.0.4:52710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005729s
	[INFO] 10.244.0.4:56525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085763s
	[INFO] 10.244.1.2:43917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168832s
	[INFO] 10.244.1.2:34972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200932s
	[INFO] 10.244.1.2:50680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181389s
	[INFO] 10.244.2.2:51430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152587s
	[INFO] 10.244.2.2:37924 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000317695s
	[INFO] 10.244.2.2:46377 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000371446s
	[INFO] 10.244.2.2:36790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012815s
	[INFO] 10.244.0.4:35196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000409388s
	[INFO] 10.244.1.2:43265 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235404s
	[INFO] 10.244.2.2:56515 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113892s
	[INFO] 10.244.2.2:33574 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251263s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
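The reflector failures in the three coredns logs above all target the in-cluster apiserver service IP (10.96.0.1:443), so they line up with the windows in which the control plane was restarting rather than with a fault in coredns itself. As a rough follow-up check that the service regained healthy endpoints afterwards (plain kubectl, assuming minikube's default context name matches the profile), one could run:

    kubectl --context ha-076992 -n default get endpoints kubernetes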
	
	
	==> describe nodes <==
	Name:               ha-076992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T19_25_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:41:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:41:40 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:41:40 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:41:40 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:41:40 +0000   Thu, 19 Sep 2024 19:25:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    ha-076992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 88962b0779f84ff6915974a39d1a24ba
	  System UUID:                88962b07-79f8-4ff6-9159-74a39d1a24ba
	  Boot ID:                    f4736dd6-fd6e-4dc3-b2ee-64f8773325ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8wfb7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-bst8x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-nbds4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-076992                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-j846w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-076992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-076992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4d8dc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-076992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-076992                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m16s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-076992 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-076992 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-076992 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-076992 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Warning  ContainerGCFailed        6m18s (x2 over 7m18s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             6m6s (x3 over 6m55s)   kubelet          Node ha-076992 status is now: NodeNotReady
	  Normal   RegisteredNode           5m20s                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	
	
	Name:               ha-076992-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_26_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:26:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:41:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:37:15 +0000   Thu, 19 Sep 2024 19:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:37:15 +0000   Thu, 19 Sep 2024 19:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:37:15 +0000   Thu, 19 Sep 2024 19:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:37:15 +0000   Thu, 19 Sep 2024 19:36:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-076992-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fbb92a6f6fa49d49b42ed70b015086d
	  System UUID:                7fbb92a6-f6fa-49d4-9b42-ed70b015086d
	  Boot ID:                    0fe45e85-4f9b-481a-8bc8-b98a6c8a000b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c64rv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-076992-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-6d8pz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-076992-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-076992-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-tjtfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-076992-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-076992-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m53s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                    node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-076992-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-076992-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-076992-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-076992-m02 status is now: NodeNotReady
	  Normal  Starting                 5m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m43s)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m43s)  kubelet          Node ha-076992-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s (x7 over 5m43s)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	
	
	Name:               ha-076992-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_28_43_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:28:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:39:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 19 Sep 2024 19:39:02 +0000   Thu, 19 Sep 2024 19:40:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 19 Sep 2024 19:39:02 +0000   Thu, 19 Sep 2024 19:40:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 19 Sep 2024 19:39:02 +0000   Thu, 19 Sep 2024 19:40:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 19 Sep 2024 19:39:02 +0000   Thu, 19 Sep 2024 19:40:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-076992-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37704cd295b34d23a0864637f4482597
	  System UUID:                37704cd2-95b3-4d23-a086-4637f4482597
	  Boot ID:                    d8d01324-9af8-448e-92c0-f74eecf4a9a9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wdj7x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-8jqvd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-8gt7w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-076992-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-076992-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-076992-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-076992-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m20s                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   NodeNotReady             4m39s                  node-controller  Node ha-076992-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-076992-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-076992-m04 has been rebooted, boot id: d8d01324-9af8-448e-92c0-f74eecf4a9a9
	  Normal   NodeReady                2m47s                  kubelet          Node ha-076992-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-076992-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.418534] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.061113] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050106] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.181483] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.133235] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.281192] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.948588] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +3.762419] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.059014] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.974334] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.083682] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344336] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.503085] kauditd_printk_skb: 38 callbacks suppressed
	[Sep19 19:26] kauditd_printk_skb: 26 callbacks suppressed
	[Sep19 19:35] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +0.145564] systemd-fstab-generator[3559]: Ignoring "noauto" option for root device
	[  +0.177187] systemd-fstab-generator[3573]: Ignoring "noauto" option for root device
	[  +0.146656] systemd-fstab-generator[3585]: Ignoring "noauto" option for root device
	[  +0.269791] systemd-fstab-generator[3613]: Ignoring "noauto" option for root device
	[  +5.037197] systemd-fstab-generator[3707]: Ignoring "noauto" option for root device
	[  +0.092071] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.480192] kauditd_printk_skb: 22 callbacks suppressed
	[Sep19 19:36] kauditd_printk_skb: 87 callbacks suppressed
	[  +9.057023] kauditd_printk_skb: 1 callbacks suppressed
	[ +36.276079] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5] <==
	{"level":"info","ts":"2024-09-19T19:38:23.677553Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:38:23.700466Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"db356cbc19811e0e","to":"a2ed4c579ed15809","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-19T19:38:23.700548Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:38:23.711555Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"db356cbc19811e0e","to":"a2ed4c579ed15809","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-19T19:38:23.711635Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"warn","ts":"2024-09-19T19:38:25.169427Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a2ed4c579ed15809","rtt":"0s","error":"dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:38:25.169485Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a2ed4c579ed15809","rtt":"0s","error":"dial tcp 192.168.39.66:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-19T19:39:16.024951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e switched to configuration voters=(42107204596178615 15795650823209426446)"}
	{"level":"info","ts":"2024-09-19T19:39:16.027857Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"a25ac6d8ed10a2a9","local-member-id":"db356cbc19811e0e","removed-remote-peer-id":"a2ed4c579ed15809","removed-remote-peer-urls":["https://192.168.39.66:2380"]}
	{"level":"info","ts":"2024-09-19T19:39:16.028127Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"warn","ts":"2024-09-19T19:39:16.028416Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:39:16.028476Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"warn","ts":"2024-09-19T19:39:16.028846Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:39:16.028913Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:39:16.029061Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"warn","ts":"2024-09-19T19:39:16.029292Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809","error":"context canceled"}
	{"level":"warn","ts":"2024-09-19T19:39:16.029365Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"a2ed4c579ed15809","error":"failed to read a2ed4c579ed15809 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-19T19:39:16.029425Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"warn","ts":"2024-09-19T19:39:16.029682Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809","error":"context canceled"}
	{"level":"info","ts":"2024-09-19T19:39:16.029746Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:39:16.029827Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:39:16.029864Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"db356cbc19811e0e","removed-remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:39:16.029928Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"db356cbc19811e0e","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"a2ed4c579ed15809"}
	{"level":"warn","ts":"2024-09-19T19:39:16.054032Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"db356cbc19811e0e","remote-peer-id-stream-handler":"db356cbc19811e0e","remote-peer-id-from":"a2ed4c579ed15809"}
	{"level":"warn","ts":"2024-09-19T19:39:16.061857Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"db356cbc19811e0e","remote-peer-id-stream-handler":"db356cbc19811e0e","remote-peer-id-from":"a2ed4c579ed15809"}
	
	
	==> etcd [3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018] <==
	{"level":"warn","ts":"2024-09-19T19:34:05.000685Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:33:57.984059Z","time spent":"7.016617352s","remote":"127.0.0.1:50258","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	2024/09/19 19:34:05 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-19T19:34:05.059090Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.173:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T19:34:05.059148Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.173:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-19T19:34:05.059229Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"db356cbc19811e0e","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-19T19:34:05.059414Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059450Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059475Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059572Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059747Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059827Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059857Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:34:05.059881Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.059909Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.059948Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.060101Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.060155Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.060201Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.060229Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a2ed4c579ed15809"}
	{"level":"info","ts":"2024-09-19T19:34:05.063297Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.173:2380"}
	{"level":"warn","ts":"2024-09-19T19:34:05.063419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.459547423s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-19T19:34:05.063462Z","caller":"traceutil/trace.go:171","msg":"trace[1076552976] range","detail":"{range_begin:; range_end:; }","duration":"1.459606135s","start":"2024-09-19T19:34:03.603849Z","end":"2024-09-19T19:34:05.063455Z","steps":["trace[1076552976] 'agreement among raft nodes before linearized reading'  (duration: 1.459545565s)"],"step_count":1}
	{"level":"error","ts":"2024-09-19T19:34:05.063513Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-19T19:34:05.063767Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.173:2380"}
	{"level":"info","ts":"2024-09-19T19:34:05.063803Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-076992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.173:2380"],"advertise-client-urls":["https://192.168.39.173:2379"]}
	
	
	==> kernel <==
	 19:41:50 up 16 min,  0 users,  load average: 0.38, 0.41, 0.29
	Linux ha-076992 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3] <==
	I0919 19:41:00.788688       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:41:10.788254       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:41:10.788382       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:41:10.788522       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:41:10.788546       1 main.go:299] handling current node
	I0919 19:41:10.788569       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:41:10.788584       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:41:20.788207       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:41:20.788306       1 main.go:299] handling current node
	I0919 19:41:20.788334       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:41:20.788351       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:41:20.788482       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:41:20.788502       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:41:30.789295       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:41:30.789541       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:41:30.789801       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:41:30.789862       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:41:30.790055       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:41:30.790082       1 main.go:299] handling current node
	I0919 19:41:40.788247       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:41:40.788344       1 main.go:299] handling current node
	I0919 19:41:40.788358       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:41:40.788363       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:41:40.788527       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:41:40.788551       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4] <==
	I0919 19:33:29.295959       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:33:39.295184       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:33:39.295230       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:33:39.295411       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:33:39.295476       1 main.go:299] handling current node
	I0919 19:33:39.295488       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:33:39.295493       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:33:39.295557       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:33:39.295579       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:33:49.295156       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:33:49.296241       1 main.go:299] handling current node
	I0919 19:33:49.296276       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:33:49.296295       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:33:49.296618       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:33:49.296661       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:33:49.296747       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:33:49.296766       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:33:59.295132       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:33:59.295211       1 main.go:299] handling current node
	I0919 19:33:59.295224       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:33:59.295231       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:33:59.295441       1 main.go:295] Handling node with IPs: map[192.168.39.66:{}]
	I0919 19:33:59.295467       1 main.go:322] Node ha-076992-m03 has CIDR [10.244.2.0/24] 
	I0919 19:33:59.295512       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:33:59.295518       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81] <==
	I0919 19:36:35.824673       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0919 19:36:35.837452       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 19:36:35.837944       1 policy_source.go:224] refreshing policies
	I0919 19:36:35.849358       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 19:36:35.849409       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0919 19:36:35.850496       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 19:36:35.851296       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0919 19:36:35.851329       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0919 19:36:35.851431       1 shared_informer.go:320] Caches are synced for configmaps
	I0919 19:36:35.852209       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0919 19:36:35.856173       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0919 19:36:35.856256       1 aggregator.go:171] initial CRD sync complete...
	I0919 19:36:35.856277       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 19:36:35.856283       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 19:36:35.856287       1 cache.go:39] Caches are synced for autoregister controller
	I0919 19:36:35.857285       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0919 19:36:35.863397       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.232 192.168.39.66]
	I0919 19:36:35.864740       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 19:36:35.871148       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0919 19:36:35.873921       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0919 19:36:35.937513       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 19:36:36.757747       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 19:36:37.192835       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.173 192.168.39.232 192.168.39.66]
	W0919 19:36:47.188227       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.173 192.168.39.232]
	W0919 19:39:27.198042       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.173 192.168.39.232]
	
	
	==> kube-apiserver [d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c] <==
	I0919 19:35:49.660318       1 options.go:228] external host was not specified, using 192.168.39.173
	I0919 19:35:49.674410       1 server.go:142] Version: v1.31.1
	I0919 19:35:49.674455       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:35:50.391038       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0919 19:35:50.403080       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 19:35:50.405606       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0919 19:35:50.405693       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0919 19:35:50.405948       1 instance.go:232] Using reconciler: lease
	W0919 19:36:10.392166       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0919 19:36:10.392220       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0919 19:36:10.409238       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0919 19:36:10.409357       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8] <==
	I0919 19:35:50.908423       1 serving.go:386] Generated self-signed cert in-memory
	I0919 19:35:51.291883       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0919 19:35:51.292124       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:35:51.294092       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 19:35:51.294354       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 19:35:51.294895       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0919 19:35:51.295062       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0919 19:36:11.416373       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.173:8443/healthz\": dial tcp 192.168.39.173:8443: connect: connection refused"
	
	
	==> kube-controller-manager [44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c] <==
	I0919 19:39:12.875565       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.926µs"
	I0919 19:39:14.749925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.058µs"
	I0919 19:39:15.756835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.628µs"
	I0919 19:39:15.766773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.289µs"
	I0919 19:39:17.112941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.570405ms"
	I0919 19:39:17.113810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.363µs"
	I0919 19:39:27.006243       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076992-m04"
	I0919 19:39:27.006739       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m03"
	E0919 19:39:39.171654       1 gc_controller.go:151] "Failed to get node" err="node \"ha-076992-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076992-m03"
	E0919 19:39:39.171708       1 gc_controller.go:151] "Failed to get node" err="node \"ha-076992-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076992-m03"
	E0919 19:39:39.171716       1 gc_controller.go:151] "Failed to get node" err="node \"ha-076992-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076992-m03"
	E0919 19:39:39.171725       1 gc_controller.go:151] "Failed to get node" err="node \"ha-076992-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076992-m03"
	E0919 19:39:39.171732       1 gc_controller.go:151] "Failed to get node" err="node \"ha-076992-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076992-m03"
	E0919 19:39:59.171910       1 gc_controller.go:151] "Failed to get node" err="node \"ha-076992-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076992-m03"
	E0919 19:39:59.171936       1 gc_controller.go:151] "Failed to get node" err="node \"ha-076992-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076992-m03"
	E0919 19:39:59.171942       1 gc_controller.go:151] "Failed to get node" err="node \"ha-076992-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076992-m03"
	E0919 19:39:59.171947       1 gc_controller.go:151] "Failed to get node" err="node \"ha-076992-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076992-m03"
	E0919 19:39:59.171951       1 gc_controller.go:151] "Failed to get node" err="node \"ha-076992-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076992-m03"
	I0919 19:40:04.248326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:40:04.269092       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:40:04.363461       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.223639ms"
	I0919 19:40:04.363542       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.325µs"
	I0919 19:40:05.352131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:40:09.400187       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:41:40.213574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992"
	
	
	==> kube-proxy [9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2] <==
	E0919 19:32:59.926641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:32:59.926736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:32:59.926780       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:02.995893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:02.996046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:02.996268       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:02.996368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:06.068640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:06.069207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:09.139499       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:09.139570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:09.139657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:09.139673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:18.357196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:18.357381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:21.427553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:21.428382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:21.429880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:21.429950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:42.933306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:42.933536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&resourceVersion=1716\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:46.004859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:46.005120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1707\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:33:46.005531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:33:46.005734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1715\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 19:35:51.955971       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:35:55.029038       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:35:58.100556       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:36:04.246747       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:36:16.531671       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0919 19:36:33.434609       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.173"]
	E0919 19:36:33.442335       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:36:33.526674       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 19:36:33.527103       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 19:36:33.527381       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:36:33.533680       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:36:33.534387       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:36:33.534496       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:36:33.538133       1 config.go:199] "Starting service config controller"
	I0919 19:36:33.538362       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:36:33.541156       1 config.go:328] "Starting node config controller"
	I0919 19:36:33.543065       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:36:33.540804       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:36:33.547059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:36:33.653079       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 19:36:33.653127       1 shared_informer.go:320] Caches are synced for service config
	I0919 19:36:33.653246       1 shared_informer.go:320] Caches are synced for node config
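	The "Starting ... config controller" / "Waiting for caches to sync" / "Caches are synced" lines above are the standard client-go shared-informer startup handshake. A minimal Go sketch of that pattern follows; it assumes a kubeconfig-based clientset purely for illustration, not kube-proxy's real in-cluster wiring.

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default path; kube-proxy itself uses in-cluster config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One shared factory feeds the Service and EndpointSlice informers seen in the log.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()
	epsInformer := factory.Discovery().V1().EndpointSlices().Informer()

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// "Starting ... config controller"
	factory.Start(ctx.Done())

	// "Waiting for caches to sync" -> "Caches are synced"
	if !cache.WaitForCacheSync(ctx.Done(), svcInformer.HasSynced, epsInformer.HasSynced) {
		log.Fatal("timed out waiting for caches to sync")
	}
	log.Println("caches are synced; controller loops can start")
}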
	
	
	==> kube-scheduler [5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1] <==
	I0919 19:25:32.657764       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 19:28:06.097590       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jl6lr\": pod busybox-7dff88458-jl6lr is already assigned to node \"ha-076992-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jl6lr" node="ha-076992-m03"
	E0919 19:28:06.098198       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3f7ee95d-11f9-4073-8fa9-d4aa5fc08d99(default/busybox-7dff88458-jl6lr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jl6lr"
	E0919 19:28:06.098359       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jl6lr\": pod busybox-7dff88458-jl6lr is already assigned to node \"ha-076992-m03\"" pod="default/busybox-7dff88458-jl6lr"
	I0919 19:28:06.098540       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jl6lr" node="ha-076992-m03"
	E0919 19:28:06.176510       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8wfb7\": pod busybox-7dff88458-8wfb7 is already assigned to node \"ha-076992\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8wfb7" node="ha-076992"
	E0919 19:28:06.176725       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e9e5cd58-874f-41c6-8c0a-d37b5101a1f9(default/busybox-7dff88458-8wfb7) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8wfb7"
	E0919 19:28:06.181327       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8wfb7\": pod busybox-7dff88458-8wfb7 is already assigned to node \"ha-076992\"" pod="default/busybox-7dff88458-8wfb7"
	I0919 19:28:06.181857       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8wfb7" node="ha-076992"
	E0919 19:33:52.923314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0919 19:33:53.362928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0919 19:33:53.541834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0919 19:33:53.999402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0919 19:33:54.440532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0919 19:33:55.406824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0919 19:33:55.449844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0919 19:33:57.288297       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0919 19:33:58.181022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0919 19:33:59.711856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0919 19:34:00.368470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0919 19:34:00.983401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0919 19:34:01.252059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0919 19:34:01.432147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0919 19:34:01.632427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0919 19:34:04.973856       1 run.go:72] "command failed" err="finished without leader elect"
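	The final "finished without leader elect" message means this scheduler instance stopped before ever acquiring the leader lease (the API server was unreachable). A minimal client-go leader-election sketch is shown below for orientation only; the lease name and timeouts are illustrative assumptions, not the scheduler's actual configuration.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumption: local kubeconfig; the real scheduler uses in-cluster credentials.
	restCfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(restCfg)
	hostname, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		// Hypothetical lease name; kube-scheduler uses its own well-known lease.
		LeaseMeta:  metav1.ObjectMeta{Name: "example-scheduler", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	// If this context ends before the lease is ever acquired (for example because the
	// API server stays unreachable), RunOrDie returns without the process having led,
	// which is roughly the condition reported as "finished without leader elect".
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader; starting work") },
			OnStoppedLeading: func() { log.Println("leader election loop ended") },
		},
	})
}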
	
	
	==> kube-scheduler [cfb4ace0f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79] <==
	W0919 19:36:27.942935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.173:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:27.943129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.173:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:28.949196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.173:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:28.949299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.173:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.146329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.173:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.146436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.173:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.196546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.173:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.196612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.173:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.396468       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.173:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.396513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.173:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.925771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.173:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.926086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.173:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:30.435838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.173:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:30.436058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.173:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:32.617798       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.173:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:32.617869       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.173:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:33.195606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.173:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:33.195731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.173:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:35.776364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 19:36:35.776452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 19:36:48.923565       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 19:39:12.713055       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wdj7x\": pod busybox-7dff88458-wdj7x is already assigned to node \"ha-076992-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wdj7x" node="ha-076992-m04"
	E0919 19:39:12.713392       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 25081f3e-a225-4436-852b-4fe81857e092(default/busybox-7dff88458-wdj7x) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wdj7x"
	E0919 19:39:12.713493       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wdj7x\": pod busybox-7dff88458-wdj7x is already assigned to node \"ha-076992-m04\"" pod="default/busybox-7dff88458-wdj7x"
	I0919 19:39:12.713626       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wdj7x" node="ha-076992-m04"
	
	
	==> kubelet <==
	Sep 19 19:40:31 ha-076992 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 19:40:31 ha-076992 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 19:40:31 ha-076992 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 19:40:31 ha-076992 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 19:40:31 ha-076992 kubelet[1304]: E0919 19:40:31.681859    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774831681643001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:40:31 ha-076992 kubelet[1304]: E0919 19:40:31.681900    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774831681643001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:40:41 ha-076992 kubelet[1304]: E0919 19:40:41.685245    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774841683737253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:40:41 ha-076992 kubelet[1304]: E0919 19:40:41.685578    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774841683737253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:40:51 ha-076992 kubelet[1304]: E0919 19:40:51.687094    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774851686812875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:40:51 ha-076992 kubelet[1304]: E0919 19:40:51.687361    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774851686812875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:41:01 ha-076992 kubelet[1304]: E0919 19:41:01.689302    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774861688972717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:41:01 ha-076992 kubelet[1304]: E0919 19:41:01.689341    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774861688972717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:41:11 ha-076992 kubelet[1304]: E0919 19:41:11.692450    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774871691902731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:41:11 ha-076992 kubelet[1304]: E0919 19:41:11.692490    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774871691902731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:41:21 ha-076992 kubelet[1304]: E0919 19:41:21.694184    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774881693897203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:41:21 ha-076992 kubelet[1304]: E0919 19:41:21.694207    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774881693897203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:41:31 ha-076992 kubelet[1304]: E0919 19:41:31.404488    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 19 19:41:31 ha-076992 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 19:41:31 ha-076992 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 19:41:31 ha-076992 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 19:41:31 ha-076992 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 19:41:31 ha-076992 kubelet[1304]: E0919 19:41:31.697929    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774891697333147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:41:31 ha-076992 kubelet[1304]: E0919 19:41:31.698023    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774891697333147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:41:41 ha-076992 kubelet[1304]: E0919 19:41:41.699579    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774901699164015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:41:41 ha-076992 kubelet[1304]: E0919 19:41:41.699604    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726774901699164015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 19:41:49.026282   38265 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19664-7917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
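The "bufio.Scanner: token too long" error in the stderr block above is the standard Go failure mode when a single line exceeds bufio.Scanner's default 64 KiB token limit. The sketch below shows how that limit is typically raised; it is illustrative only (the file path is a stand-in), not minikube's actual logs.go code.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical path standing in for the lastStart.txt log referenced above.
	f, err := os.Open("/tmp/lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// The default bufio.MaxScanTokenSize is 64 KiB; very long log lines trigger
	// "bufio.Scanner: token too long". Allow lines up to 10 MiB instead.
	buf := make([]byte, 0, 64*1024)
	scanner.Buffer(buf, 10*1024*1024)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		// Without the larger buffer, this is where "token too long" surfaces.
		log.Fatal(err)
	}
}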
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076992 -n ha-076992
helpers_test.go:261: (dbg) Run:  kubectl --context ha-076992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (781.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-076992 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0919 19:43:59.334705   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:45:22.404868   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:59.334672   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:53:59.334669   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-076992 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: signal: killed (12m59.487184311s)

                                                
                                                
-- stdout --
	* [ha-076992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-076992" primary control-plane node in "ha-076992" cluster
	* Updating the running kvm2 "ha-076992" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-076992-m02" control-plane node in "ha-076992" cluster
	* Updating the running kvm2 "ha-076992-m02" VM ...
	* Found network options:
	  - NO_PROXY=192.168.39.173
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.173
	* Verifying Kubernetes components...
	
	* Starting "ha-076992-m04" worker node in "ha-076992" cluster
	* Restarting existing kvm2 VM for "ha-076992-m04" ...
	* Found network options:
	  - NO_PROXY=192.168.39.173,192.168.39.232
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.173
	  - env NO_PROXY=192.168.39.173,192.168.39.232
	* Verifying Kubernetes components...

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:41:51.065996   38345 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:41:51.066107   38345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:41:51.066117   38345 out.go:358] Setting ErrFile to fd 2...
	I0919 19:41:51.066121   38345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:41:51.066337   38345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:41:51.066893   38345 out.go:352] Setting JSON to false
	I0919 19:41:51.067934   38345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5055,"bootTime":1726769856,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:41:51.068027   38345 start.go:139] virtualization: kvm guest
	I0919 19:41:51.070625   38345 out.go:177] * [ha-076992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:41:51.072277   38345 notify.go:220] Checking for updates...
	I0919 19:41:51.072286   38345 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:41:51.073688   38345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:41:51.074995   38345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:41:51.076433   38345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:41:51.077635   38345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:41:51.079187   38345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:41:51.081044   38345 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:41:51.081593   38345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:41:51.081643   38345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:41:51.097008   38345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0919 19:41:51.097421   38345 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:41:51.098014   38345 main.go:141] libmachine: Using API Version  1
	I0919 19:41:51.098046   38345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:41:51.098359   38345 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:41:51.098548   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:41:51.098768   38345 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:41:51.099069   38345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:41:51.099137   38345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:41:51.114319   38345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I0919 19:41:51.114751   38345 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:41:51.115271   38345 main.go:141] libmachine: Using API Version  1
	I0919 19:41:51.115291   38345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:41:51.115587   38345 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:41:51.115726   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:41:51.153376   38345 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 19:41:51.154661   38345 start.go:297] selected driver: kvm2
	I0919 19:41:51.154673   38345 start.go:901] validating driver "kvm2" against &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:41:51.154816   38345 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:41:51.155108   38345 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:41:51.155174   38345 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 19:41:51.171058   38345 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 19:41:51.171779   38345 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:41:51.171814   38345 cni.go:84] Creating CNI manager for ""
	I0919 19:41:51.171850   38345 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 19:41:51.171911   38345 start.go:340] cluster config:
	{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:41:51.172071   38345 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:41:51.174030   38345 out.go:177] * Starting "ha-076992" primary control-plane node in "ha-076992" cluster
	I0919 19:41:51.175585   38345 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:41:51.175623   38345 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 19:41:51.175641   38345 cache.go:56] Caching tarball of preloaded images
	I0919 19:41:51.175721   38345 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:41:51.175730   38345 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:41:51.175840   38345 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:41:51.176023   38345 start.go:360] acquireMachinesLock for ha-076992: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:41:51.176074   38345 start.go:364] duration metric: took 33.672µs to acquireMachinesLock for "ha-076992"
	I0919 19:41:51.176090   38345 start.go:96] Skipping create...Using existing machine configuration
	I0919 19:41:51.176097   38345 fix.go:54] fixHost starting: 
	I0919 19:41:51.176345   38345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:41:51.176373   38345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:41:51.191004   38345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0919 19:41:51.191479   38345 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:41:51.191945   38345 main.go:141] libmachine: Using API Version  1
	I0919 19:41:51.191974   38345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:41:51.192308   38345 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:41:51.192513   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:41:51.192661   38345 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:41:51.194316   38345 fix.go:112] recreateIfNeeded on ha-076992: state=Running err=<nil>
	W0919 19:41:51.194351   38345 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 19:41:51.197237   38345 out.go:177] * Updating the running kvm2 "ha-076992" VM ...
	I0919 19:41:51.198347   38345 machine.go:93] provisionDockerMachine start ...
	I0919 19:41:51.198365   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:41:51.198572   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:51.201289   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.201659   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:51.201681   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.201879   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:41:51.202027   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.202158   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.202295   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:41:51.202432   38345 main.go:141] libmachine: Using SSH client type: native
	I0919 19:41:51.202610   38345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:41:51.202622   38345 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 19:41:51.306779   38345 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:41:51.306814   38345 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:41:51.307060   38345 buildroot.go:166] provisioning hostname "ha-076992"
	I0919 19:41:51.307098   38345 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:41:51.307349   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:51.310467   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.310932   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:51.310960   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.311189   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:41:51.311371   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.311537   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.311694   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:41:51.311864   38345 main.go:141] libmachine: Using SSH client type: native
	I0919 19:41:51.312087   38345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:41:51.312105   38345 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992 && echo "ha-076992" | sudo tee /etc/hostname
	I0919 19:41:51.431192   38345 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:41:51.431217   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:51.433945   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.434313   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:51.434340   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.434518   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:41:51.434687   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.434821   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.434938   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:41:51.435095   38345 main.go:141] libmachine: Using SSH client type: native
	I0919 19:41:51.435266   38345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:41:51.435279   38345 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:41:51.538363   38345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:41:51.538393   38345 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:41:51.538416   38345 buildroot.go:174] setting up certificates
	I0919 19:41:51.538429   38345 provision.go:84] configureAuth start
	I0919 19:41:51.538441   38345 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:41:51.538759   38345 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:41:51.541627   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.541958   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:51.541985   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.542077   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:51.544636   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.544979   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:51.545005   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.545156   38345 provision.go:143] copyHostCerts
	I0919 19:41:51.545192   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:41:51.545245   38345 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:41:51.545260   38345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:41:51.545341   38345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:41:51.545474   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:41:51.545503   38345 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:41:51.545512   38345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:41:51.545551   38345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:41:51.545612   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:41:51.545636   38345 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:41:51.545644   38345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:41:51.545681   38345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:41:51.545742   38345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992 san=[127.0.0.1 192.168.39.173 ha-076992 localhost minikube]
	I0919 19:41:52.108598   38345 provision.go:177] copyRemoteCerts
	I0919 19:41:52.108670   38345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:41:52.108698   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:52.111307   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:52.111637   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:52.111669   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:52.111801   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:41:52.111977   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:52.112116   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:41:52.112276   38345 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:41:52.194360   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:41:52.194435   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:41:52.225736   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:41:52.225840   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 19:41:52.252627   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:41:52.252705   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 19:41:52.280715   38345 provision.go:87] duration metric: took 742.273973ms to configureAuth
	I0919 19:41:52.280742   38345 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:41:52.280960   38345 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:41:52.281037   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:52.284092   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:52.284397   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:52.284425   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:52.284561   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:41:52.284749   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:52.284902   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:52.285009   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:41:52.285193   38345 main.go:141] libmachine: Using SSH client type: native
	I0919 19:41:52.285360   38345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:41:52.285375   38345 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:43:27.085606   38345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:43:27.085636   38345 machine.go:96] duration metric: took 1m35.887277351s to provisionDockerMachine
	I0919 19:43:27.085648   38345 start.go:293] postStartSetup for "ha-076992" (driver="kvm2")
	I0919 19:43:27.085658   38345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:43:27.085676   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.085961   38345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:43:27.085988   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:43:27.089131   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.089558   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.089572   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.089744   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:43:27.089878   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.089982   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:43:27.090056   38345 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:43:27.173159   38345 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:43:27.177958   38345 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:43:27.177990   38345 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:43:27.178063   38345 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:43:27.178162   38345 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:43:27.178178   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:43:27.178303   38345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:43:27.188127   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:43:27.212965   38345 start.go:296] duration metric: took 127.302484ms for postStartSetup
	I0919 19:43:27.213014   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.213350   38345 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0919 19:43:27.213404   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:43:27.216215   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.216646   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.216668   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.216881   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:43:27.217049   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.217251   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:43:27.217392   38345 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	W0919 19:43:27.300993   38345 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0919 19:43:27.301024   38345 fix.go:56] duration metric: took 1m36.124925788s for fixHost
	I0919 19:43:27.301050   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:43:27.303604   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.303914   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.303948   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.304085   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:43:27.304291   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.304461   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.304609   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:43:27.304760   38345 main.go:141] libmachine: Using SSH client type: native
	I0919 19:43:27.304919   38345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:43:27.304928   38345 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:43:27.406310   38345 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726775007.369599564
	
	I0919 19:43:27.406347   38345 fix.go:216] guest clock: 1726775007.369599564
	I0919 19:43:27.406360   38345 fix.go:229] Guest: 2024-09-19 19:43:27.369599564 +0000 UTC Remote: 2024-09-19 19:43:27.3010336 +0000 UTC m=+96.269376252 (delta=68.565964ms)
	I0919 19:43:27.406423   38345 fix.go:200] guest clock delta is within tolerance: 68.565964ms
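
For context on the fix.go lines above: the remote "date +%s.%N" output is parsed into a guest timestamp, compared against the host wall clock, and the guest clock is left alone when the absolute delta stays within a tolerance (here 68.565964ms). The Go sketch below reconstructs that comparison for illustration only; the helper name and the one-second tolerance are assumptions, not minikube's actual fix.go code.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` (e.g. "1726775007.369599564")
// into a time.Time. Hypothetical helper; parsing through float64 loses a little
// nanosecond precision, which is fine for a millisecond-scale drift check.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	// Guest clock string taken from the log line above.
	guest, err := parseGuestClock("1726775007.369599564")
	if err != nil {
		panic(err)
	}
	remote := time.Now().UTC()
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance: only resync the guest clock when drift exceeds one second.
	const tolerance = time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock drifted by %v, would resync\n", delta)
	}
}
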
	I0919 19:43:27.406439   38345 start.go:83] releasing machines lock for "ha-076992", held for 1m36.230345045s
	I0919 19:43:27.406469   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.406787   38345 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:43:27.409673   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.410176   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.410198   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.410433   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.410956   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.411101   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.411197   38345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:43:27.411251   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:43:27.411385   38345 ssh_runner.go:195] Run: cat /version.json
	I0919 19:43:27.411413   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:43:27.413881   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.413914   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.414272   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.414297   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.414363   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.414397   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.414459   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:43:27.414556   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:43:27.414635   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.414705   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.414761   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:43:27.414879   38345 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:43:27.414893   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:43:27.415019   38345 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:43:27.522678   38345 ssh_runner.go:195] Run: systemctl --version
	I0919 19:43:27.530229   38345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:43:27.705047   38345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:43:27.714139   38345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:43:27.714207   38345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:43:27.732607   38345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 19:43:27.732635   38345 start.go:495] detecting cgroup driver to use...
	I0919 19:43:27.732707   38345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:43:27.757103   38345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:43:27.775322   38345 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:43:27.775381   38345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:43:27.791966   38345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:43:27.808063   38345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:43:27.978278   38345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:43:28.129140   38345 docker.go:233] disabling docker service ...
	I0919 19:43:28.129221   38345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:43:28.147509   38345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:43:28.161825   38345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:43:28.311156   38345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:43:28.463044   38345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:43:28.480233   38345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:43:28.501653   38345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:43:28.501709   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.513925   38345 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:43:28.513993   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.524367   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.535580   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.546452   38345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:43:28.557258   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.568931   38345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.580957   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.591745   38345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:43:28.601246   38345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:43:28.610671   38345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:43:28.761786   38345 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:45:03.031020   38345 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.269187268s)
	I0919 19:45:03.031054   38345 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:45:03.031114   38345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:45:03.042480   38345 start.go:563] Will wait 60s for crictl version
	I0919 19:45:03.042557   38345 ssh_runner.go:195] Run: which crictl
	I0919 19:45:03.047092   38345 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:45:03.087780   38345 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:45:03.087883   38345 ssh_runner.go:195] Run: crio --version
	I0919 19:45:03.119120   38345 ssh_runner.go:195] Run: crio --version
	I0919 19:45:03.150532   38345 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:45:03.152024   38345 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:45:03.154995   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:45:03.155411   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:45:03.155442   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:45:03.155707   38345 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:45:03.160775   38345 kubeadm.go:883] updating cluster {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 19:45:03.160962   38345 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:45:03.161031   38345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:45:03.212758   38345 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:45:03.212781   38345 crio.go:433] Images already preloaded, skipping extraction
	I0919 19:45:03.212830   38345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:45:03.248443   38345 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:45:03.248466   38345 cache_images.go:84] Images are preloaded, skipping loading
	I0919 19:45:03.248474   38345 kubeadm.go:934] updating node { 192.168.39.173 8443 v1.31.1 crio true true} ...
	I0919 19:45:03.248577   38345 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:45:03.248655   38345 ssh_runner.go:195] Run: crio config
	I0919 19:45:03.297116   38345 cni.go:84] Creating CNI manager for ""
	I0919 19:45:03.297139   38345 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 19:45:03.297149   38345 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 19:45:03.297173   38345 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076992 NodeName:ha-076992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 19:45:03.297345   38345 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
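
The kubeadm.go lines above show the option struct and the kubeadm YAML it expands into. As a rough illustration of how such a manifest can be produced from those options, the sketch below renders a minimal ClusterConfiguration fragment with text/template; the struct, field names, and template are assumptions and cover only a small subset of what minikube's bootstrapper actually emits.

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is an illustrative subset of the options seen in the
// "kubeadm options" log line above; it is not minikube's own type.
type kubeadmParams struct {
	APIServerPort     int
	KubernetesVersion string
	ClusterName       string
	PodSubnet         string
	ServiceCIDR       string
	ControlPlane      string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlane}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values taken from the config dumped above.
	p := kubeadmParams{
		APIServerPort:     8443,
		KubernetesVersion: "v1.31.1",
		ClusterName:       "mk",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		ControlPlane:      "control-plane.minikube.internal",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfigTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
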
	
	I0919 19:45:03.297385   38345 kube-vip.go:115] generating kube-vip config ...
	I0919 19:45:03.297438   38345 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:45:03.309280   38345 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:45:03.309422   38345 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
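
The kube-vip config above is later written out as a static pod manifest (see the scp to /etc/kubernetes/manifests/kube-vip.yaml further down in this log). As an illustration of how such a manifest could be assembled programmatically, the sketch below builds a trimmed-down version of the same pod with the Kubernetes API types and marshals it to YAML; it keeps only a few of the environment variables and is not how minikube generates the file.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// buildKubeVipPod assembles a reduced version of the static pod shown above.
// Image tag and VIP address are taken from the logged manifest; everything
// else here is a simplification for illustration.
func buildKubeVipPod(vip string) *corev1.Pod {
	return &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "kube-vip", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			Containers: []corev1.Container{{
				Name:            "kube-vip",
				Image:           "ghcr.io/kube-vip/kube-vip:v0.8.0",
				ImagePullPolicy: corev1.PullIfNotPresent,
				Args:            []string{"manager"},
				Env: []corev1.EnvVar{
					{Name: "vip_arp", Value: "true"},
					{Name: "port", Value: "8443"},
					{Name: "cp_enable", Value: "true"},
					{Name: "address", Value: vip},
				},
				SecurityContext: &corev1.SecurityContext{
					Capabilities: &corev1.Capabilities{
						Add: []corev1.Capability{"NET_ADMIN", "NET_RAW"},
					},
				},
			}},
		},
	}
}

func main() {
	out, err := yaml.Marshal(buildKubeVipPod("192.168.39.254"))
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
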
	I0919 19:45:03.309493   38345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:45:03.319454   38345 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 19:45:03.319529   38345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 19:45:03.329560   38345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0919 19:45:03.347142   38345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:45:03.364688   38345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0919 19:45:03.382364   38345 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:45:03.401000   38345 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:45:03.405303   38345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:45:03.574039   38345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:45:03.592321   38345 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.173
	I0919 19:45:03.592346   38345 certs.go:194] generating shared ca certs ...
	I0919 19:45:03.592365   38345 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:45:03.592560   38345 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:45:03.592606   38345 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:45:03.592616   38345 certs.go:256] generating profile certs ...
	I0919 19:45:03.592697   38345 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:45:03.592722   38345 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.ac18dafb
	I0919 19:45:03.592736   38345 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.ac18dafb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.232 192.168.39.254]
	I0919 19:45:03.716352   38345 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.ac18dafb ...
	I0919 19:45:03.716381   38345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.ac18dafb: {Name:mk624593ea726a4612aef684462b753c5c1d410c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:45:03.716562   38345 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.ac18dafb ...
	I0919 19:45:03.716575   38345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.ac18dafb: {Name:mkae76db79ccd8bfba7b2fba484d92ae36183f4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:45:03.716649   38345 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.ac18dafb -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:45:03.716799   38345 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.ac18dafb -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:45:03.716928   38345 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:45:03.716943   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:45:03.716956   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:45:03.716969   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:45:03.716982   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:45:03.716993   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:45:03.717007   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:45:03.717024   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:45:03.717037   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:45:03.717119   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:45:03.717176   38345 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:45:03.717187   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:45:03.717212   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:45:03.717234   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:45:03.717259   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:45:03.717299   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:45:03.717325   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:45:03.717340   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:45:03.717352   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:45:03.717895   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:45:03.744350   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:45:03.768605   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:45:03.793076   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:45:03.817444   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 19:45:03.843671   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 19:45:03.868983   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:45:03.895503   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:45:03.920257   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:45:03.945863   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:45:03.975789   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:45:04.002459   38345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 19:45:04.022325   38345 ssh_runner.go:195] Run: openssl version
	I0919 19:45:04.029250   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:45:04.041139   38345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:45:04.045653   38345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:45:04.045698   38345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:45:04.051653   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 19:45:04.062051   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:45:04.073188   38345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:45:04.077668   38345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:45:04.077733   38345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:45:04.083710   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:45:04.093610   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:45:04.105082   38345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:45:04.110423   38345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:45:04.110495   38345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:45:04.116561   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
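
The test -L / ln -fs commands above do what c_rehash or update-ca-certificates would: each certificate is linked into /etc/ssl/certs under its OpenSSL subject hash (the value printed by "openssl x509 -hash -noout") with a ".0" suffix, so TLS clients can look it up by hash. Below is a minimal Go sketch of that rehash step, shelling out to openssl; the function name and error handling are assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertBySubjectHash mimics the rehash step logged above: compute the
// OpenSSL subject hash of a PEM certificate and create <certsDir>/<hash>.0
// pointing at it. Illustrative only.
func linkCertBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, matching the `ln -fs` semantics in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
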
	I0919 19:45:04.127777   38345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:45:04.133608   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 19:45:04.140345   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 19:45:04.146685   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 19:45:04.152617   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 19:45:04.158387   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 19:45:04.164448   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
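
The six openssl invocations above use "-checkend 86400" to ask whether each control-plane certificate expires within the next 24 hours. The same check can be done in pure Go with crypto/x509, as in the sketch below; minikube itself shells out to openssl as logged, so this is only an equivalent illustration.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, which is what `openssl x509 -checkend 86400` tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; adjust for your environment.
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}

Running this against the certificate paths in the log should reproduce the same expire/keep decision that the openssl commands above return.
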
	I0919 19:45:04.170533   38345 kubeadm.go:392] StartCluster: {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:45:04.170649   38345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 19:45:04.170691   38345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 19:45:04.209807   38345 cri.go:89] found id: "004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0"
	I0919 19:45:04.209838   38345 cri.go:89] found id: "44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c"
	I0919 19:45:04.209845   38345 cri.go:89] found id: "2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81"
	I0919 19:45:04.209849   38345 cri.go:89] found id: "63df2e8772528c1c649ba71943a50c5a9584fc0c35d1e10002a0188afe543524"
	I0919 19:45:04.209853   38345 cri.go:89] found id: "4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00"
	I0919 19:45:04.209858   38345 cri.go:89] found id: "c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730"
	I0919 19:45:04.209862   38345 cri.go:89] found id: "6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3"
	I0919 19:45:04.209866   38345 cri.go:89] found id: "cfb4ace0f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79"
	I0919 19:45:04.209870   38345 cri.go:89] found id: "b344ac64a2b998915ace13c79db6455320b4234dac25c23d10d7757629b3f372"
	I0919 19:45:04.209877   38345 cri.go:89] found id: "2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5"
	I0919 19:45:04.209881   38345 cri.go:89] found id: "262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8"
	I0919 19:45:04.209885   38345 cri.go:89] found id: "d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c"
	I0919 19:45:04.209889   38345 cri.go:89] found id: "611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c"
	I0919 19:45:04.209907   38345 cri.go:89] found id: "17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3"
	I0919 19:45:04.209915   38345 cri.go:89] found id: "cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0"
	I0919 19:45:04.209918   38345 cri.go:89] found id: "d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4"
	I0919 19:45:04.209922   38345 cri.go:89] found id: "9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2"
	I0919 19:45:04.209926   38345 cri.go:89] found id: "5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1"
	I0919 19:45:04.209928   38345 cri.go:89] found id: "3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018"
	I0919 19:45:04.209931   38345 cri.go:89] found id: ""
	I0919 19:45:04.209972   38345 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-076992 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-076992 -n ha-076992
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-076992 logs -n 25: (1.703466651s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m04 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp testdata/cp-test.txt                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992:/home/docker/cp-test_ha-076992-m04_ha-076992.txt                       |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992 sudo cat                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992.txt                                 |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m02:/home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m02 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m03:/home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n                                                                 | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | ha-076992-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-076992 ssh -n ha-076992-m03 sudo cat                                          | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC | 19 Sep 24 19:29 UTC |
	|         | /home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-076992 node stop m02 -v=7                                                     | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-076992 node start m02 -v=7                                                    | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-076992 -v=7                                                           | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-076992 -v=7                                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-076992 --wait=true -v=7                                                    | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:34 UTC | 19 Sep 24 19:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-076992                                                                | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:39 UTC |                     |
	| node    | ha-076992 node delete m03 -v=7                                                   | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:39 UTC | 19 Sep 24 19:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-076992 stop -v=7                                                              | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-076992 --wait=true                                                         | ha-076992 | jenkins | v1.34.0 | 19 Sep 24 19:41 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 19:41:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 19:41:51.065996   38345 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:41:51.066107   38345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:41:51.066117   38345 out.go:358] Setting ErrFile to fd 2...
	I0919 19:41:51.066121   38345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:41:51.066337   38345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:41:51.066893   38345 out.go:352] Setting JSON to false
	I0919 19:41:51.067934   38345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5055,"bootTime":1726769856,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:41:51.068027   38345 start.go:139] virtualization: kvm guest
	I0919 19:41:51.070625   38345 out.go:177] * [ha-076992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:41:51.072277   38345 notify.go:220] Checking for updates...
	I0919 19:41:51.072286   38345 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:41:51.073688   38345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:41:51.074995   38345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:41:51.076433   38345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:41:51.077635   38345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:41:51.079187   38345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:41:51.081044   38345 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:41:51.081593   38345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:41:51.081643   38345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:41:51.097008   38345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0919 19:41:51.097421   38345 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:41:51.098014   38345 main.go:141] libmachine: Using API Version  1
	I0919 19:41:51.098046   38345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:41:51.098359   38345 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:41:51.098548   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:41:51.098768   38345 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:41:51.099069   38345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:41:51.099137   38345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:41:51.114319   38345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I0919 19:41:51.114751   38345 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:41:51.115271   38345 main.go:141] libmachine: Using API Version  1
	I0919 19:41:51.115291   38345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:41:51.115587   38345 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:41:51.115726   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:41:51.153376   38345 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 19:41:51.154661   38345 start.go:297] selected driver: kvm2
	I0919 19:41:51.154673   38345 start.go:901] validating driver "kvm2" against &{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:41:51.154816   38345 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:41:51.155108   38345 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:41:51.155174   38345 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 19:41:51.171058   38345 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 19:41:51.171779   38345 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:41:51.171814   38345 cni.go:84] Creating CNI manager for ""
	I0919 19:41:51.171850   38345 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 19:41:51.171911   38345 start.go:340] cluster config:
	{Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:41:51.172071   38345 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:41:51.174030   38345 out.go:177] * Starting "ha-076992" primary control-plane node in "ha-076992" cluster
	I0919 19:41:51.175585   38345 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:41:51.175623   38345 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 19:41:51.175641   38345 cache.go:56] Caching tarball of preloaded images
	I0919 19:41:51.175721   38345 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 19:41:51.175730   38345 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:41:51.175840   38345 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/config.json ...
	I0919 19:41:51.176023   38345 start.go:360] acquireMachinesLock for ha-076992: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 19:41:51.176074   38345 start.go:364] duration metric: took 33.672µs to acquireMachinesLock for "ha-076992"
	I0919 19:41:51.176090   38345 start.go:96] Skipping create...Using existing machine configuration
	I0919 19:41:51.176097   38345 fix.go:54] fixHost starting: 
	I0919 19:41:51.176345   38345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:41:51.176373   38345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:41:51.191004   38345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0919 19:41:51.191479   38345 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:41:51.191945   38345 main.go:141] libmachine: Using API Version  1
	I0919 19:41:51.191974   38345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:41:51.192308   38345 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:41:51.192513   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:41:51.192661   38345 main.go:141] libmachine: (ha-076992) Calling .GetState
	I0919 19:41:51.194316   38345 fix.go:112] recreateIfNeeded on ha-076992: state=Running err=<nil>
	W0919 19:41:51.194351   38345 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 19:41:51.197237   38345 out.go:177] * Updating the running kvm2 "ha-076992" VM ...
	I0919 19:41:51.198347   38345 machine.go:93] provisionDockerMachine start ...
	I0919 19:41:51.198365   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:41:51.198572   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:51.201289   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.201659   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:51.201681   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.201879   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:41:51.202027   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.202158   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.202295   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:41:51.202432   38345 main.go:141] libmachine: Using SSH client type: native
	I0919 19:41:51.202610   38345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:41:51.202622   38345 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 19:41:51.306779   38345 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:41:51.306814   38345 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:41:51.307060   38345 buildroot.go:166] provisioning hostname "ha-076992"
	I0919 19:41:51.307098   38345 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:41:51.307349   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:51.310467   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.310932   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:51.310960   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.311189   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:41:51.311371   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.311537   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.311694   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:41:51.311864   38345 main.go:141] libmachine: Using SSH client type: native
	I0919 19:41:51.312087   38345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:41:51.312105   38345 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076992 && echo "ha-076992" | sudo tee /etc/hostname
	I0919 19:41:51.431192   38345 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076992
	
	I0919 19:41:51.431217   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:51.433945   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.434313   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:51.434340   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.434518   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:41:51.434687   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.434821   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:51.434938   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:41:51.435095   38345 main.go:141] libmachine: Using SSH client type: native
	I0919 19:41:51.435266   38345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:41:51.435279   38345 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076992/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:41:51.538363   38345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
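The provisioner above sets the guest hostname and patches /etc/hosts by running shell commands over SSH. A minimal sketch of issuing the same hostname command with golang.org/x/crypto/ssh, reusing the address, user, and key path that appear in this log; error handling and host-key checking are simplified and this is not how minikube itself implements it:

// provision_ssh.go: minimal sketch of the remote hostname command above, issued
// with golang.org/x/crypto/ssh. Address, user, and key path are the ones shown
// in this log; host-key verification is skipped, which is acceptable only for a
// throwaway test VM.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.39.173:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	const name = "ha-076992"
	out, err := session.CombinedOutput(fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name))
	if err != nil {
		log.Fatalf("remote command failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}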
	I0919 19:41:51.538393   38345 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 19:41:51.538416   38345 buildroot.go:174] setting up certificates
	I0919 19:41:51.538429   38345 provision.go:84] configureAuth start
	I0919 19:41:51.538441   38345 main.go:141] libmachine: (ha-076992) Calling .GetMachineName
	I0919 19:41:51.538759   38345 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:41:51.541627   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.541958   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:51.541985   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.542077   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:51.544636   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.544979   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:51.545005   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:51.545156   38345 provision.go:143] copyHostCerts
	I0919 19:41:51.545192   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:41:51.545245   38345 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 19:41:51.545260   38345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 19:41:51.545341   38345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 19:41:51.545474   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:41:51.545503   38345 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 19:41:51.545512   38345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 19:41:51.545551   38345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 19:41:51.545612   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:41:51.545636   38345 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 19:41:51.545644   38345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 19:41:51.545681   38345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 19:41:51.545742   38345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.ha-076992 san=[127.0.0.1 192.168.39.173 ha-076992 localhost minikube]
	I0919 19:41:52.108598   38345 provision.go:177] copyRemoteCerts
	I0919 19:41:52.108670   38345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:41:52.108698   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:52.111307   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:52.111637   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:52.111669   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:52.111801   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:41:52.111977   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:52.112116   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:41:52.112276   38345 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:41:52.194360   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:41:52.194435   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 19:41:52.225736   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:41:52.225840   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 19:41:52.252627   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:41:52.252705   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 19:41:52.280715   38345 provision.go:87] duration metric: took 742.273973ms to configureAuth
	I0919 19:41:52.280742   38345 buildroot.go:189] setting minikube options for container-runtime
	I0919 19:41:52.280960   38345 config.go:182] Loaded profile config "ha-076992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:41:52.281037   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:41:52.284092   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:52.284397   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:41:52.284425   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:41:52.284561   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:41:52.284749   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:52.284902   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:41:52.285009   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:41:52.285193   38345 main.go:141] libmachine: Using SSH client type: native
	I0919 19:41:52.285360   38345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:41:52.285375   38345 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:43:27.085606   38345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:43:27.085636   38345 machine.go:96] duration metric: took 1m35.887277351s to provisionDockerMachine
	I0919 19:43:27.085648   38345 start.go:293] postStartSetup for "ha-076992" (driver="kvm2")
	I0919 19:43:27.085658   38345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:43:27.085676   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.085961   38345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:43:27.085988   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:43:27.089131   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.089558   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.089572   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.089744   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:43:27.089878   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.089982   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:43:27.090056   38345 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:43:27.173159   38345 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:43:27.177958   38345 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 19:43:27.177990   38345 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 19:43:27.178063   38345 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 19:43:27.178162   38345 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 19:43:27.178178   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 19:43:27.178303   38345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:43:27.188127   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:43:27.212965   38345 start.go:296] duration metric: took 127.302484ms for postStartSetup
	I0919 19:43:27.213014   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.213350   38345 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0919 19:43:27.213404   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:43:27.216215   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.216646   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.216668   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.216881   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:43:27.217049   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.217251   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:43:27.217392   38345 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	W0919 19:43:27.300993   38345 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0919 19:43:27.301024   38345 fix.go:56] duration metric: took 1m36.124925788s for fixHost
	I0919 19:43:27.301050   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:43:27.303604   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.303914   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.303948   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.304085   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:43:27.304291   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.304461   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.304609   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:43:27.304760   38345 main.go:141] libmachine: Using SSH client type: native
	I0919 19:43:27.304919   38345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I0919 19:43:27.304928   38345 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 19:43:27.406310   38345 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726775007.369599564
	
	I0919 19:43:27.406347   38345 fix.go:216] guest clock: 1726775007.369599564
	I0919 19:43:27.406360   38345 fix.go:229] Guest: 2024-09-19 19:43:27.369599564 +0000 UTC Remote: 2024-09-19 19:43:27.3010336 +0000 UTC m=+96.269376252 (delta=68.565964ms)
	I0919 19:43:27.406423   38345 fix.go:200] guest clock delta is within tolerance: 68.565964ms
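The clock check above parses the guest's `date +%s.%N` output and compares it to the host-side timestamp, adjusting the guest clock only when the delta exceeds a tolerance; here the 68.565964ms delta passes. A small sketch of that comparison using the two timestamps from the log (the 2s tolerance is an assumption for illustration, not minikube's constant):

// clock_delta.go: minimal sketch of the guest/host clock comparison above.
// Both timestamps are the ones printed in the log; the 2s tolerance is an
// assumed value for illustration.
package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestTime parses the guest's `date +%s.%N` output (float parsing loses
// sub-microsecond precision, which does not matter here).
func guestTime(dateOutput string) (time.Time, error) {
	secs, err := strconv.ParseFloat(dateOutput, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(0, int64(secs*float64(time.Second))), nil
}

func main() {
	guest, err := guestTime("1726775007.369599564")
	if err != nil {
		panic(err)
	}
	host := time.Unix(1726775007, 301033600) // 2024-09-19 19:43:27.3010336 UTC
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
}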
	I0919 19:43:27.406439   38345 start.go:83] releasing machines lock for "ha-076992", held for 1m36.230345045s
	I0919 19:43:27.406469   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.406787   38345 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:43:27.409673   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.410176   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.410198   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.410433   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.410956   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.411101   38345 main.go:141] libmachine: (ha-076992) Calling .DriverName
	I0919 19:43:27.411197   38345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:43:27.411251   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:43:27.411385   38345 ssh_runner.go:195] Run: cat /version.json
	I0919 19:43:27.411413   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHHostname
	I0919 19:43:27.413881   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.413914   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.414272   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.414297   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.414363   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:43:27.414397   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:43:27.414459   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:43:27.414556   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHPort
	I0919 19:43:27.414635   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.414705   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHKeyPath
	I0919 19:43:27.414761   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:43:27.414879   38345 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:43:27.414893   38345 main.go:141] libmachine: (ha-076992) Calling .GetSSHUsername
	I0919 19:43:27.415019   38345 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/ha-076992/id_rsa Username:docker}
	I0919 19:43:27.522678   38345 ssh_runner.go:195] Run: systemctl --version
	I0919 19:43:27.530229   38345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:43:27.705047   38345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 19:43:27.714139   38345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 19:43:27.714207   38345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:43:27.732607   38345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 19:43:27.732635   38345 start.go:495] detecting cgroup driver to use...
	I0919 19:43:27.732707   38345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:43:27.757103   38345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:43:27.775322   38345 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:43:27.775381   38345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:43:27.791966   38345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:43:27.808063   38345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:43:27.978278   38345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:43:28.129140   38345 docker.go:233] disabling docker service ...
	I0919 19:43:28.129221   38345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:43:28.147509   38345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:43:28.161825   38345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:43:28.311156   38345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:43:28.463044   38345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:43:28.480233   38345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:43:28.501653   38345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:43:28.501709   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.513925   38345 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:43:28.513993   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.524367   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.535580   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.546452   38345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:43:28.557258   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.568931   38345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.580957   38345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:43:28.591745   38345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:43:28.601246   38345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:43:28.610671   38345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:43:28.761786   38345 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:45:03.031020   38345 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.269187268s)
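The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, conmon cgroup, sysctls), reloads systemd, and restarts crio, which in this run took 1m34s. A minimal sketch of the first two rewrites done locally with regexp instead of remote sed, purely as an illustration of the edits being made:

// crio_conf.go: minimal sketch of the first two drop-in edits above, done with
// regexp instead of remote sed: pin the pause image and force the cgroupfs
// cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. The follow-up
// daemon-reload and crio restart are omitted.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	cfg := string(data)
	cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.10"`)
	cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(cfg, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
		log.Fatal(err)
	}
	// systemctl daemon-reload && systemctl restart crio would still be needed,
	// as in the log above.
}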
	I0919 19:45:03.031054   38345 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:45:03.031114   38345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:45:03.042480   38345 start.go:563] Will wait 60s for crictl version
	I0919 19:45:03.042557   38345 ssh_runner.go:195] Run: which crictl
	I0919 19:45:03.047092   38345 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:45:03.087780   38345 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 19:45:03.087883   38345 ssh_runner.go:195] Run: crio --version
	I0919 19:45:03.119120   38345 ssh_runner.go:195] Run: crio --version
	I0919 19:45:03.150532   38345 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 19:45:03.152024   38345 main.go:141] libmachine: (ha-076992) Calling .GetIP
	I0919 19:45:03.154995   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:45:03.155411   38345 main.go:141] libmachine: (ha-076992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:f5:95", ip: ""} in network mk-ha-076992: {Iface:virbr1 ExpiryTime:2024-09-19 20:25:05 +0000 UTC Type:0 Mac:52:54:00:7d:f5:95 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-076992 Clientid:01:52:54:00:7d:f5:95}
	I0919 19:45:03.155442   38345 main.go:141] libmachine: (ha-076992) DBG | domain ha-076992 has defined IP address 192.168.39.173 and MAC address 52:54:00:7d:f5:95 in network mk-ha-076992
	I0919 19:45:03.155707   38345 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 19:45:03.160775   38345 kubeadm.go:883] updating cluster {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 19:45:03.160962   38345 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:45:03.161031   38345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:45:03.212758   38345 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:45:03.212781   38345 crio.go:433] Images already preloaded, skipping extraction
	I0919 19:45:03.212830   38345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:45:03.248443   38345 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:45:03.248466   38345 cache_images.go:84] Images are preloaded, skipping loading
	I0919 19:45:03.248474   38345 kubeadm.go:934] updating node { 192.168.39.173 8443 v1.31.1 crio true true} ...
	I0919 19:45:03.248577   38345 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:45:03.248655   38345 ssh_runner.go:195] Run: crio config
	I0919 19:45:03.297116   38345 cni.go:84] Creating CNI manager for ""
	I0919 19:45:03.297139   38345 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 19:45:03.297149   38345 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 19:45:03.297173   38345 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076992 NodeName:ha-076992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 19:45:03.297345   38345 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
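The kubeadm config dumped above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch, assuming the stream has been saved to a local kubeadm.yaml, that splits the documents and prints each apiVersion and kind with gopkg.in/yaml.v3:

// kubeadm_kinds.go: minimal sketch that splits a multi-document kubeadm YAML
// stream like the one above and prints each document's apiVersion and kind,
// using gopkg.in/yaml.v3. The kubeadm.yaml input path is an assumption.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}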
	
	I0919 19:45:03.297385   38345 kube-vip.go:115] generating kube-vip config ...
	I0919 19:45:03.297438   38345 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0919 19:45:03.309280   38345 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:45:03.309422   38345 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
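The kube-vip manifest above is generated with the HA VIP (192.168.39.254), port 8443, and interface eth0 filled in, and is later copied to /etc/kubernetes/manifests/kube-vip.yaml. A minimal sketch of filling a fragment of such a manifest with text/template; the template text and struct fields are illustrative and are not minikube's actual template:

// kube_vip_fragment.go: minimal sketch of templating a fragment of a kube-vip
// static-pod manifest from parameters, in the spirit of the generation step
// above. The VIP, port, and interface values are the ones shown in the manifest.
package main

import (
	"log"
	"os"
	"text/template"
)

const fragment = `    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: lb_port
      value: "{{ .Port }}"
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(fragment))
	data := struct {
		Interface string
		VIP       string
		Port      int
	}{Interface: "eth0", VIP: "192.168.39.254", Port: 8443}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}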
	I0919 19:45:03.309493   38345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:45:03.319454   38345 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 19:45:03.319529   38345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 19:45:03.329560   38345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0919 19:45:03.347142   38345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:45:03.364688   38345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0919 19:45:03.382364   38345 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:45:03.401000   38345 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:45:03.405303   38345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:45:03.574039   38345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:45:03.592321   38345 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992 for IP: 192.168.39.173
	I0919 19:45:03.592346   38345 certs.go:194] generating shared ca certs ...
	I0919 19:45:03.592365   38345 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:45:03.592560   38345 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 19:45:03.592606   38345 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 19:45:03.592616   38345 certs.go:256] generating profile certs ...
	I0919 19:45:03.592697   38345 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/client.key
	I0919 19:45:03.592722   38345 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.ac18dafb
	I0919 19:45:03.592736   38345 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.ac18dafb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.173 192.168.39.232 192.168.39.254]
	I0919 19:45:03.716352   38345 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.ac18dafb ...
	I0919 19:45:03.716381   38345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.ac18dafb: {Name:mk624593ea726a4612aef684462b753c5c1d410c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:45:03.716562   38345 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.ac18dafb ...
	I0919 19:45:03.716575   38345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.ac18dafb: {Name:mkae76db79ccd8bfba7b2fba484d92ae36183f4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:45:03.716649   38345 certs.go:381] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt.ac18dafb -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt
	I0919 19:45:03.716799   38345 certs.go:385] copying /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key.ac18dafb -> /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key
	I0919 19:45:03.716928   38345 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key
	I0919 19:45:03.716943   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:45:03.716956   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:45:03.716969   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:45:03.716982   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:45:03.716993   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:45:03.717007   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:45:03.717024   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:45:03.717037   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:45:03.717119   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 19:45:03.717176   38345 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 19:45:03.717187   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 19:45:03.717212   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 19:45:03.717234   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:45:03.717259   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 19:45:03.717299   38345 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 19:45:03.717325   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 19:45:03.717340   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:45:03.717352   38345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 19:45:03.717895   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:45:03.744350   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:45:03.768605   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:45:03.793076   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 19:45:03.817444   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 19:45:03.843671   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 19:45:03.868983   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:45:03.895503   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/ha-076992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 19:45:03.920257   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 19:45:03.945863   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:45:03.975789   38345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 19:45:04.002459   38345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 19:45:04.022325   38345 ssh_runner.go:195] Run: openssl version
	I0919 19:45:04.029250   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:45:04.041139   38345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:45:04.045653   38345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:45:04.045698   38345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:45:04.051653   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 19:45:04.062051   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 19:45:04.073188   38345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 19:45:04.077668   38345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 19:45:04.077733   38345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 19:45:04.083710   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 19:45:04.093610   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 19:45:04.105082   38345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 19:45:04.110423   38345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 19:45:04.110495   38345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 19:45:04.116561   38345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
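The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes of the corresponding PEM files, which is how OpenSSL looks up CA certificates in /etc/ssl/certs. The mapping can be reproduced on the node with the same command the log runs:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected to print b5213941, matching the symlink created above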
	I0919 19:45:04.127777   38345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:45:04.133608   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 19:45:04.140345   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 19:45:04.146685   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 19:45:04.152617   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 19:45:04.158387   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 19:45:04.164448   38345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
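Each `openssl x509 -noout -checkend 86400` run above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is how minikube decides whether a profile certificate needs regenerating. The expiry can also be printed directly; a sketch using one of the paths from this log:

    minikube ssh -p ha-076992 -- sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt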
	I0919 19:45:04.170533   38345 kubeadm.go:392] StartCluster: {Name:ha-076992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-076992 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.157 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:45:04.170649   38345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 19:45:04.170691   38345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 19:45:04.209807   38345 cri.go:89] found id: "004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0"
	I0919 19:45:04.209838   38345 cri.go:89] found id: "44e35509c3580ae68666a4c35123292f1fb22a56ba1636dfd217d34a6a6e441c"
	I0919 19:45:04.209845   38345 cri.go:89] found id: "2e1f4501fff9a38dde8bb1b0c781368f125ccae30e7cd1a6042ebc1649f7cd81"
	I0919 19:45:04.209849   38345 cri.go:89] found id: "63df2e8772528c1c649ba71943a50c5a9584fc0c35d1e10002a0188afe543524"
	I0919 19:45:04.209853   38345 cri.go:89] found id: "4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00"
	I0919 19:45:04.209858   38345 cri.go:89] found id: "c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730"
	I0919 19:45:04.209862   38345 cri.go:89] found id: "6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3"
	I0919 19:45:04.209866   38345 cri.go:89] found id: "cfb4ace0f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79"
	I0919 19:45:04.209870   38345 cri.go:89] found id: "b344ac64a2b998915ace13c79db6455320b4234dac25c23d10d7757629b3f372"
	I0919 19:45:04.209877   38345 cri.go:89] found id: "2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5"
	I0919 19:45:04.209881   38345 cri.go:89] found id: "262c164bf25b4edae1fa88ae749e41c788b96fff74e6cbd2daf9817de1b938b8"
	I0919 19:45:04.209885   38345 cri.go:89] found id: "d6a80e020160808614ad455e5861dfba6ad8d49246f044c4917d5bdf078bb15c"
	I0919 19:45:04.209889   38345 cri.go:89] found id: "611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c"
	I0919 19:45:04.209907   38345 cri.go:89] found id: "17ef846dadbee35f41487257630426b45330cd41a5a5f57cbed9b0c7c3eb10e3"
	I0919 19:45:04.209915   38345 cri.go:89] found id: "cbaa19f6b3857c587ef708f0d211f7ada8173b9ff211f786082b7d72e6d1cac0"
	I0919 19:45:04.209918   38345 cri.go:89] found id: "d623b5f012d8ab63604fec73af4f3bfe462c7cf5e360b52492b1a277c57b50b4"
	I0919 19:45:04.209922   38345 cri.go:89] found id: "9d62ecb2cc70abfa8924242baf95ce4232980a8567f8268a5fde9b0f2dcb05d2"
	I0919 19:45:04.209926   38345 cri.go:89] found id: "5745c8d186325d5f12aad1c627edc6c69c499973d88317622cf80aa81fc69ac1"
	I0919 19:45:04.209928   38345 cri.go:89] found id: "3beffc038ef33441119735dafe7d2f052b2ba7b7063958c10b1822a5e2ac1018"
	I0919 19:45:04.209931   38345 cri.go:89] found id: ""
	I0919 19:45:04.209972   38345 ssh_runner.go:195] Run: sudo runc list -f json
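The container IDs listed above come from filtering CRI containers by the io.kubernetes.pod.namespace=kube-system label, as in the crictl command the log runs. Any one of them can be examined in more detail from inside the VM; for example, using the first ID found above:

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
    sudo crictl inspect 004cf0a26efe0dddc4e450f94e67c7df5e707c66f3ba4e781ab0ace2f1b17ac0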
	
	
	==> CRI-O <==
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.167469372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775691167444554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dff12220-38e8-4a5f-9635-878865bbab3c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.168167520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dceb39c8-9a8c-47af-9877-45e1b3190778 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.168236167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dceb39c8-9a8c-47af-9877-45e1b3190778 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.168609988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e37e132a7d47612569e8fa62c58df13b158cbf3298b6e2b508383bc6aa81a1e7,PodSandboxId:1bc4922546486406027e92007a85dc358f4b5fa43590178182c48cd2370b6ab9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726775296400957175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569cc465916e3a55595915b50f72cb99c942e304f1d9fafb98d4dd24f90f6e15,PodSandboxId:98c3484593d70bf7c2b1c1cb9e32f174110a624a65590edc4e79f9ae75799fa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726775279400458451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc8f047f0e800cd74c4ee6c30beb3fa49f8e36b8654b5367fd95246a2c5d6f8,PodSandboxId:d17315140a3bcd63ec70a58f9e0931096ef69d5639bffb77fb37fb1ee11233ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726775276398570870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3081db7aeeaa28cb8a2b7919f12fa93918b39164de4dd2d3443c379d6b87d,PodSandboxId:d17315140a3bcd63ec70a58f9e0931096ef69d5639bffb77fb37fb1ee11233ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775175403440002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6348fde1f6d938c28d9560b2606d67c26d75abb8097e420ee3d798d47865d0,PodSandboxId:98c3484593d70bf7c2b1c1cb9e32f174110a624a65590edc4e79f9ae75799fa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775174399598617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20db48879be4f4abf7169147f02da35db5bf21f9008c9c5c301201754558371,PodSandboxId:a2dfb098151e7416d9e5b6bfa4202be7e980e3c5a186c4495e9dd56e05ed2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726775142672224944,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b6d53533e8bbdf83feeca07dfd8af6f77e4cecc4437ec3811219a913e5d93a,PodSandboxId:1bc4922546486406027e92007a85dc358f4b5fa43590178182c48cd2370b6ab9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726775109853874023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118f4002887380620900e3fdefe53082dec9e4081ccb8a63838b74d6a204d5da,PodSandboxId:5a40b56b7fcec33fd3b4bc219e741f7164fce109a8ef43fc02ee677899267593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726775109723526787,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},&Container{Id:e0f34c5e0c76fb670dbfe8fd1cab537fee4affae7a5ff1dd5acf436ba3cb668a,PodSandboxId:a5a8e867bc079b02b4114aa1c09690d4824428eaf1e47f4cb78a35b50414652a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726775109533118014,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:32aead7e6d36aa4cd159c437bf90339360482f9c9985298380fb3396ca7b6303,PodSandboxId:6684d20e392ec248cb3a6d90ffdfdac47406a6233a3797ec4c83e2e4dccab5aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726775109452468990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3bf1fc11a18b7ad1facf078
4085bfae164ef018dd1c43d2b60585af25d77eb,PodSandboxId:e2ff30e0745299795fe63b14176b9c426a24cba267469838219b5e4b2f4288e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726775109770052627,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b749431ff13b8dd2ed0c76d40237fcb4abc0d835ac213338ceb27e1a3e37063,PodSandboxId:515c744a421fd721b2a6c6714e57629904781482a2941180c968474432bae7f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726775109365754963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c24c13be07e64dda2e80724703bc9b29ba428e216b991dd494bf886bf5e58e7,PodSandboxId:ad1ed21ce6ce26fc2cda86a2e5e31fda47fc87d39fe1a312b79be4d276678777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726775109203560518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774582732677453,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726774564362708272,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726774549595372991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e386f72e5d37
98428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726774549590283036,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0f3e597ba737236f8b2d73821f37c3b98501414f9
7261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726774549543229690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db6455320b4234dac25c23d10d7757629b3f372,PodSa
ndboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726774549399914222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726774549303575711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726774544774770713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dceb39c8-9a8c-47af-9877-45e1b3190778 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.213398770Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae60248a-4d13-4b1f-bdea-7bd30219184d name=/runtime.v1.RuntimeService/Version
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.213489268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae60248a-4d13-4b1f-bdea-7bd30219184d name=/runtime.v1.RuntimeService/Version
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.214885125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5f11e87-d07d-4079-8c02-568b5e16165e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.215363586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775691215340301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5f11e87-d07d-4079-8c02-568b5e16165e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.216081163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=420a5c5f-2aa7-4909-804b-48b9808f3df9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.216148897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=420a5c5f-2aa7-4909-804b-48b9808f3df9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.216537206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e37e132a7d47612569e8fa62c58df13b158cbf3298b6e2b508383bc6aa81a1e7,PodSandboxId:1bc4922546486406027e92007a85dc358f4b5fa43590178182c48cd2370b6ab9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726775296400957175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569cc465916e3a55595915b50f72cb99c942e304f1d9fafb98d4dd24f90f6e15,PodSandboxId:98c3484593d70bf7c2b1c1cb9e32f174110a624a65590edc4e79f9ae75799fa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726775279400458451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc8f047f0e800cd74c4ee6c30beb3fa49f8e36b8654b5367fd95246a2c5d6f8,PodSandboxId:d17315140a3bcd63ec70a58f9e0931096ef69d5639bffb77fb37fb1ee11233ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726775276398570870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3081db7aeeaa28cb8a2b7919f12fa93918b39164de4dd2d3443c379d6b87d,PodSandboxId:d17315140a3bcd63ec70a58f9e0931096ef69d5639bffb77fb37fb1ee11233ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775175403440002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6348fde1f6d938c28d9560b2606d67c26d75abb8097e420ee3d798d47865d0,PodSandboxId:98c3484593d70bf7c2b1c1cb9e32f174110a624a65590edc4e79f9ae75799fa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775174399598617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20db48879be4f4abf7169147f02da35db5bf21f9008c9c5c301201754558371,PodSandboxId:a2dfb098151e7416d9e5b6bfa4202be7e980e3c5a186c4495e9dd56e05ed2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726775142672224944,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b6d53533e8bbdf83feeca07dfd8af6f77e4cecc4437ec3811219a913e5d93a,PodSandboxId:1bc4922546486406027e92007a85dc358f4b5fa43590178182c48cd2370b6ab9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726775109853874023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118f4002887380620900e3fdefe53082dec9e4081ccb8a63838b74d6a204d5da,PodSandboxId:5a40b56b7fcec33fd3b4bc219e741f7164fce109a8ef43fc02ee677899267593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726775109723526787,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},&Container{Id:e0f34c5e0c76fb670dbfe8fd1cab537fee4affae7a5ff1dd5acf436ba3cb668a,PodSandboxId:a5a8e867bc079b02b4114aa1c09690d4824428eaf1e47f4cb78a35b50414652a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726775109533118014,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:32aead7e6d36aa4cd159c437bf90339360482f9c9985298380fb3396ca7b6303,PodSandboxId:6684d20e392ec248cb3a6d90ffdfdac47406a6233a3797ec4c83e2e4dccab5aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726775109452468990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3bf1fc11a18b7ad1facf078
4085bfae164ef018dd1c43d2b60585af25d77eb,PodSandboxId:e2ff30e0745299795fe63b14176b9c426a24cba267469838219b5e4b2f4288e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726775109770052627,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b749431ff13b8dd2ed0c76d40237fcb4abc0d835ac213338ceb27e1a3e37063,PodSandboxId:515c744a421fd721b2a6c6714e57629904781482a2941180c968474432bae7f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726775109365754963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c24c13be07e64dda2e80724703bc9b29ba428e216b991dd494bf886bf5e58e7,PodSandboxId:ad1ed21ce6ce26fc2cda86a2e5e31fda47fc87d39fe1a312b79be4d276678777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726775109203560518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774582732677453,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726774564362708272,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726774549595372991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e386f72e5d37
98428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726774549590283036,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0f3e597ba737236f8b2d73821f37c3b98501414f9
7261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726774549543229690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db6455320b4234dac25c23d10d7757629b3f372,PodSa
ndboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726774549399914222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726774549303575711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726774544774770713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=420a5c5f-2aa7-4909-804b-48b9808f3df9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.260781492Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b086300-7321-490f-88bb-37f4bae8649d name=/runtime.v1.RuntimeService/Version
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.260874577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b086300-7321-490f-88bb-37f4bae8649d name=/runtime.v1.RuntimeService/Version
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.262396044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86d264d2-cf1e-4dc3-bf07-c6dc17f0a67f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.262843899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775691262820489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86d264d2-cf1e-4dc3-bf07-c6dc17f0a67f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.263418068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf263bc3-f766-4aa0-931a-f10d92e4f487 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.263487554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf263bc3-f766-4aa0-931a-f10d92e4f487 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.263876627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e37e132a7d47612569e8fa62c58df13b158cbf3298b6e2b508383bc6aa81a1e7,PodSandboxId:1bc4922546486406027e92007a85dc358f4b5fa43590178182c48cd2370b6ab9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726775296400957175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569cc465916e3a55595915b50f72cb99c942e304f1d9fafb98d4dd24f90f6e15,PodSandboxId:98c3484593d70bf7c2b1c1cb9e32f174110a624a65590edc4e79f9ae75799fa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726775279400458451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc8f047f0e800cd74c4ee6c30beb3fa49f8e36b8654b5367fd95246a2c5d6f8,PodSandboxId:d17315140a3bcd63ec70a58f9e0931096ef69d5639bffb77fb37fb1ee11233ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726775276398570870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3081db7aeeaa28cb8a2b7919f12fa93918b39164de4dd2d3443c379d6b87d,PodSandboxId:d17315140a3bcd63ec70a58f9e0931096ef69d5639bffb77fb37fb1ee11233ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775175403440002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6348fde1f6d938c28d9560b2606d67c26d75abb8097e420ee3d798d47865d0,PodSandboxId:98c3484593d70bf7c2b1c1cb9e32f174110a624a65590edc4e79f9ae75799fa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775174399598617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20db48879be4f4abf7169147f02da35db5bf21f9008c9c5c301201754558371,PodSandboxId:a2dfb098151e7416d9e5b6bfa4202be7e980e3c5a186c4495e9dd56e05ed2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726775142672224944,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b6d53533e8bbdf83feeca07dfd8af6f77e4cecc4437ec3811219a913e5d93a,PodSandboxId:1bc4922546486406027e92007a85dc358f4b5fa43590178182c48cd2370b6ab9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726775109853874023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118f4002887380620900e3fdefe53082dec9e4081ccb8a63838b74d6a204d5da,PodSandboxId:5a40b56b7fcec33fd3b4bc219e741f7164fce109a8ef43fc02ee677899267593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726775109723526787,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},&Container{Id:e0f34c5e0c76fb670dbfe8fd1cab537fee4affae7a5ff1dd5acf436ba3cb668a,PodSandboxId:a5a8e867bc079b02b4114aa1c09690d4824428eaf1e47f4cb78a35b50414652a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726775109533118014,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:32aead7e6d36aa4cd159c437bf90339360482f9c9985298380fb3396ca7b6303,PodSandboxId:6684d20e392ec248cb3a6d90ffdfdac47406a6233a3797ec4c83e2e4dccab5aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726775109452468990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3bf1fc11a18b7ad1facf078
4085bfae164ef018dd1c43d2b60585af25d77eb,PodSandboxId:e2ff30e0745299795fe63b14176b9c426a24cba267469838219b5e4b2f4288e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726775109770052627,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b749431ff13b8dd2ed0c76d40237fcb4abc0d835ac213338ceb27e1a3e37063,PodSandboxId:515c744a421fd721b2a6c6714e57629904781482a2941180c968474432bae7f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726775109365754963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c24c13be07e64dda2e80724703bc9b29ba428e216b991dd494bf886bf5e58e7,PodSandboxId:ad1ed21ce6ce26fc2cda86a2e5e31fda47fc87d39fe1a312b79be4d276678777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726775109203560518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774582732677453,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726774564362708272,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726774549595372991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e386f72e5d37
98428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726774549590283036,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0f3e597ba737236f8b2d73821f37c3b98501414f9
7261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726774549543229690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db6455320b4234dac25c23d10d7757629b3f372,PodSa
ndboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726774549399914222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726774549303575711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726774544774770713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf263bc3-f766-4aa0-931a-f10d92e4f487 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.306495349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1c8048a-9721-4c7e-9b45-510fa3100546 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.306592425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1c8048a-9721-4c7e-9b45-510fa3100546 name=/runtime.v1.RuntimeService/Version
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.308270172Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a9c6a50-9296-45f5-ac8f-80ca401b26aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.308682831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775691308660034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a9c6a50-9296-45f5-ac8f-80ca401b26aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.309219585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa7e8017-cff8-4aff-ab78-6e9380968102 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.309303876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa7e8017-cff8-4aff-ab78-6e9380968102 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 19:54:51 ha-076992 crio[6407]: time="2024-09-19 19:54:51.309666721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e37e132a7d47612569e8fa62c58df13b158cbf3298b6e2b508383bc6aa81a1e7,PodSandboxId:1bc4922546486406027e92007a85dc358f4b5fa43590178182c48cd2370b6ab9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726775296400957175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569cc465916e3a55595915b50f72cb99c942e304f1d9fafb98d4dd24f90f6e15,PodSandboxId:98c3484593d70bf7c2b1c1cb9e32f174110a624a65590edc4e79f9ae75799fa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726775279400458451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 5,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc8f047f0e800cd74c4ee6c30beb3fa49f8e36b8654b5367fd95246a2c5d6f8,PodSandboxId:d17315140a3bcd63ec70a58f9e0931096ef69d5639bffb77fb37fb1ee11233ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726775276398570870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3081db7aeeaa28cb8a2b7919f12fa93918b39164de4dd2d3443c379d6b87d,PodSandboxId:d17315140a3bcd63ec70a58f9e0931096ef69d5639bffb77fb37fb1ee11233ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775175403440002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5aa3049515e8c07c16189cb9b261d4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae6348fde1f6d938c28d9560b2606d67c26d75abb8097e420ee3d798d47865d0,PodSandboxId:98c3484593d70bf7c2b1c1cb9e32f174110a624a65590edc4e79f9ae75799fa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775174399598617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b693200c7b44d836573bbd57560a83e1,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20db48879be4f4abf7169147f02da35db5bf21f9008c9c5c301201754558371,PodSandboxId:a2dfb098151e7416d9e5b6bfa4202be7e980e3c5a186c4495e9dd56e05ed2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726775142672224944,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b6d53533e8bbdf83feeca07dfd8af6f77e4cecc4437ec3811219a913e5d93a,PodSandboxId:1bc4922546486406027e92007a85dc358f4b5fa43590178182c48cd2370b6ab9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726775109853874023,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7964879c-5097-490e-b1ba-dd41091ca283,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:118f4002887380620900e3fdefe53082dec9e4081ccb8a63838b74d6a204d5da,PodSandboxId:5a40b56b7fcec33fd3b4bc219e741f7164fce109a8ef43fc02ee677899267593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726775109723526787,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},&Container{Id:e0f34c5e0c76fb670dbfe8fd1cab537fee4affae7a5ff1dd5acf436ba3cb668a,PodSandboxId:a5a8e867bc079b02b4114aa1c09690d4824428eaf1e47f4cb78a35b50414652a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726775109533118014,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:32aead7e6d36aa4cd159c437bf90339360482f9c9985298380fb3396ca7b6303,PodSandboxId:6684d20e392ec248cb3a6d90ffdfdac47406a6233a3797ec4c83e2e4dccab5aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726775109452468990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3bf1fc11a18b7ad1facf078
4085bfae164ef018dd1c43d2b60585af25d77eb,PodSandboxId:e2ff30e0745299795fe63b14176b9c426a24cba267469838219b5e4b2f4288e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726775109770052627,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b749431ff13b8dd2ed0c76d40237fcb4abc0d835ac213338ceb27e1a3e37063,PodSandboxId:515c744a421fd721b2a6c6714e57629904781482a2941180c968474432bae7f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726775109365754963,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c24c13be07e64dda2e80724703bc9b29ba428e216b991dd494bf886bf5e58e7,PodSandboxId:ad1ed21ce6ce26fc2cda86a2e5e31fda47fc87d39fe1a312b79be4d276678777,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726775109203560518,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cfb43f1ef0cec8698f48548619510da03d07c5cade1bfa77a6a1d76caf13f0,PodSandboxId:8772b407d7c257913f2f56b9e5afc65bc9712cbdde5255fd75d4a9f7f5cbdd2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726774582732677453,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8wfb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9e5cd58-874f-41c6-8c0a-d37b5101a1f9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4526c50933cabab1163f9e4e7c2aad2c372f27b9f34678935885748e0516df00,PodSandboxId:4f59647076dbb0c5c829f67a8cb4cd6223d23d833ca54c7d0bee15ce868f968a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1726774564362708272,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22afd76430fe0849caa93fde9d59c02f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730,PodSandboxId:8209dcfdd30b45b8a6b50b5c1b17cddaf93fae7b7b02b92919451bdf26632e45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726774549595372991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d8dc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d522b18-9ae7-46a9-a6c7-e1560a1822de,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e386f72e5d37
98428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3,PodSandboxId:c194bf9cd1d21bd0b46f66718093914fc206fc0f730f89218f07816aa6c989bd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726774549590283036,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdccd08d-8a5d-4495-8ad3-5591de87862f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb4ace0f3e597ba737236f8b2d73821f37c3b98501414f9
7261fabca9f4cb79,PodSandboxId:e9e69a1062cea909e627e9ebda09fd630aaf82570113dea25b32dfc0c964c235,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726774549543229690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c4b85bfdfb554afca940fe6375dba9,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b344ac64a2b998915ace13c79db6455320b4234dac25c23d10d7757629b3f372,PodSa
ndboxId:80031de6f892161d7a5a8defc63d8b99bec57cf7e1227fb81a5e85adb43ca85c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726774549399914222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bst8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165f4eae-fc28-4b50-b35f-f61f95d9872a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5,PodSandboxId:fb62ba74ee7f1b07e5fd7d0172b7d15d369873d0ae1974a90bc2adc2e2fb3d49,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726774549303575711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b7783d18d62d18697a4d1aa0ff5755,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c,PodSandboxId:257eb8bdca5fb0c3762a4378322793248d1310495036962c500c43ba6a2c2fad,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726774544774770713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nbds4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ceb0f8-a15c-405e-b0ed-d54a8bfe332f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa7e8017-cff8-4aff-ab78-6e9380968102 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e37e132a7d476       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Running             storage-provisioner       6                   1bc4922546486       storage-provisioner
	569cc465916e3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   6 minutes ago       Running             kube-controller-manager   5                   98c3484593d70       kube-controller-manager-ha-076992
	ebc8f047f0e80       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   6 minutes ago       Running             kube-apiserver            6                   d17315140a3bc       kube-apiserver-ha-076992
	71f3081db7aee       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   8 minutes ago       Exited              kube-apiserver            5                   d17315140a3bc       kube-apiserver-ha-076992
	ae6348fde1f6d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   8 minutes ago       Exited              kube-controller-manager   4                   98c3484593d70       kube-controller-manager-ha-076992
	c20db48879be4       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   9 minutes ago       Running             busybox                   2                   a2dfb098151e7       busybox-7dff88458-8wfb7
	47b6d53533e8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner       5                   1bc4922546486       storage-provisioner
	df3bf1fc11a18       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   2                   e2ff30e074529       coredns-7c65d6cfc9-bst8x
	118f400288738       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   9 minutes ago       Running             kube-vip                  1                   5a40b56b7fcec       kube-vip-ha-076992
	e0f34c5e0c76f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   9 minutes ago       Running             kindnet-cni               2                   a5a8e867bc079       kindnet-j846w
	32aead7e6d36a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                2                   6684d20e392ec       kube-proxy-4d8dc
	9b749431ff13b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   515c744a421fd       etcd-ha-076992
	9c24c13be07e6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   ad1ed21ce6ce2       kube-scheduler-ha-076992
	b1cfb43f1ef0c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   18 minutes ago      Exited              busybox                   1                   8772b407d7c25       busybox-7dff88458-8wfb7
	4526c50933cab       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   18 minutes ago      Exited              kube-vip                  0                   4f59647076dbb       kube-vip-ha-076992
	c412d5b70d043       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   19 minutes ago      Exited              kube-proxy                1                   8209dcfdd30b4       kube-proxy-4d8dc
	6e386f72e5d37       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   19 minutes ago      Exited              kindnet-cni               1                   c194bf9cd1d21       kindnet-j846w
	cfb4ace0f3e59       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   19 minutes ago      Exited              kube-scheduler            1                   e9e69a1062cea       kube-scheduler-ha-076992
	b344ac64a2b99       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 minutes ago      Exited              coredns                   1                   80031de6f8921       coredns-7c65d6cfc9-bst8x
	2810749ec6ddc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   19 minutes ago      Exited              etcd                      1                   fb62ba74ee7f1       etcd-ha-076992
	611497be6a620       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 minutes ago      Exited              coredns                   1                   257eb8bdca5fb       coredns-7c65d6cfc9-nbds4
	
	
	==> coredns [611497be6a620df8c410117651e924c3bf42d67fa914301d490156f6c7a4fa3c] <==
	Trace[1419069143]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:36:10.364)
	Trace[1419069143]: [10.001580214s] [10.001580214s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50042->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50042->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b344ac64a2b998915ace13c79db6455320b4234dac25c23d10d7757629b3f372] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df3bf1fc11a18b7ad1facf0784085bfae164ef018dd1c43d2b60585af25d77eb] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-076992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T19_25_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:54:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:51:50 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:51:50 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:51:50 +0000   Thu, 19 Sep 2024 19:25:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:51:50 +0000   Thu, 19 Sep 2024 19:25:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    ha-076992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 88962b0779f84ff6915974a39d1a24ba
	  System UUID:                88962b07-79f8-4ff6-9159-74a39d1a24ba
	  Boot ID:                    f4736dd6-fd6e-4dc3-b2ee-64f8773325ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8wfb7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-7c65d6cfc9-bst8x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 coredns-7c65d6cfc9-nbds4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-ha-076992                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kindnet-j846w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29m
	  kube-system                 kube-apiserver-ha-076992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-076992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-4d8dc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-076992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-076992                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 29m                  kube-proxy       
	  Normal   Starting                 8m59s                kube-proxy       
	  Normal   Starting                 18m                  kube-proxy       
	  Normal   NodeHasSufficientPID     29m                  kubelet          Node ha-076992 status is now: NodeHasSufficientPID
	  Normal   Starting                 29m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  29m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  29m                  kubelet          Node ha-076992 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29m                  kubelet          Node ha-076992 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           29m                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   NodeReady                29m                  kubelet          Node ha-076992 status is now: NodeReady
	  Normal   RegisteredNode           28m                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           27m                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Warning  ContainerGCFailed        10m (x5 over 20m)    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             9m54s (x9 over 19m)  kubelet          Node ha-076992 status is now: NodeNotReady
	  Normal   RegisteredNode           8m5s                 node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	  Normal   RegisteredNode           6m50s                node-controller  Node ha-076992 event: Registered Node ha-076992 in Controller
	
	
	Name:               ha-076992-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_26_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:26:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:54:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:51:52 +0000   Thu, 19 Sep 2024 19:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:51:52 +0000   Thu, 19 Sep 2024 19:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:51:52 +0000   Thu, 19 Sep 2024 19:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:51:52 +0000   Thu, 19 Sep 2024 19:36:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-076992-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fbb92a6f6fa49d49b42ed70b015086d
	  System UUID:                7fbb92a6-f6fa-49d4-9b42-ed70b015086d
	  Boot ID:                    0fe45e85-4f9b-481a-8bc8-b98a6c8a000b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c64rv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-ha-076992-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-6d8pz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-076992-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-076992-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-tjtfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-076992-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-076992-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 8m7s               kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 28m                kube-proxy       
	  Normal   NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28m                node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal   NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node ha-076992-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           28m                node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal   RegisteredNode           27m                node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal   NodeNotReady             24m                node-controller  Node ha-076992-m02 status is now: NodeNotReady
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node ha-076992-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node ha-076992-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Warning  ContainerGCFailed        9m45s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             9m (x2 over 18m)   kubelet          Node ha-076992-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           8m5s               node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	  Normal   RegisteredNode           6m50s              node-controller  Node ha-076992-m02 event: Registered Node ha-076992-m02 in Controller
	
	
	Name:               ha-076992-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076992-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-076992
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_28_43_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:28:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076992-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:54:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:53:28 +0000   Thu, 19 Sep 2024 19:52:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:53:28 +0000   Thu, 19 Sep 2024 19:52:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:53:28 +0000   Thu, 19 Sep 2024 19:52:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:53:28 +0000   Thu, 19 Sep 2024 19:52:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    ha-076992-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37704cd295b34d23a0864637f4482597
	  System UUID:                37704cd2-95b3-4d23-a086-4637f4482597
	  Boot ID:                    65f9429b-73d9-4408-ad75-80d01d53dcae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wdj7x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kindnet-8jqvd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-proxy-8gt7w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 15m                  kube-proxy       
	  Normal   Starting                 26m                  kube-proxy       
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  26m (x2 over 26m)    kubelet          Node ha-076992-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  26m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     26m (x2 over 26m)    kubelet          Node ha-076992-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    26m (x2 over 26m)    kubelet          Node ha-076992-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           26m                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   RegisteredNode           26m                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   RegisteredNode           26m                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   NodeReady                25m                  kubelet          Node ha-076992-m04 status is now: NodeReady
	  Normal   RegisteredNode           18m                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   NodeNotReady             17m                  node-controller  Node ha-076992-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           16m                  node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   Starting                 15m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m (x2 over 15m)    kubelet          Node ha-076992-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x2 over 15m)    kubelet          Node ha-076992-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x2 over 15m)    kubelet          Node ha-076992-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 15m                  kubelet          Node ha-076992-m04 has been rebooted, boot id: d8d01324-9af8-448e-92c0-f74eecf4a9a9
	  Normal   NodeReady                15m                  kubelet          Node ha-076992-m04 status is now: NodeReady
	  Normal   NodeNotReady             14m                  node-controller  Node ha-076992-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           8m5s                 node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   RegisteredNode           6m50s                node-controller  Node ha-076992-m04 event: Registered Node ha-076992-m04 in Controller
	  Normal   Starting                 113s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  113s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 113s (x2 over 113s)  kubelet          Node ha-076992-m04 has been rebooted, boot id: 65f9429b-73d9-4408-ad75-80d01d53dcae
	  Normal   NodeHasSufficientMemory  113s (x3 over 113s)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    113s (x3 over 113s)  kubelet          Node ha-076992-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     113s (x3 over 113s)  kubelet          Node ha-076992-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             113s                 kubelet          Node ha-076992-m04 status is now: NodeNotReady
	  Normal   NodeReady                113s                 kubelet          Node ha-076992-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.083682] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.344336] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.503085] kauditd_printk_skb: 38 callbacks suppressed
	[Sep19 19:26] kauditd_printk_skb: 26 callbacks suppressed
	[Sep19 19:35] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +0.145564] systemd-fstab-generator[3559]: Ignoring "noauto" option for root device
	[  +0.177187] systemd-fstab-generator[3573]: Ignoring "noauto" option for root device
	[  +0.146656] systemd-fstab-generator[3585]: Ignoring "noauto" option for root device
	[  +0.269791] systemd-fstab-generator[3613]: Ignoring "noauto" option for root device
	[  +5.037197] systemd-fstab-generator[3707]: Ignoring "noauto" option for root device
	[  +0.092071] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.480192] kauditd_printk_skb: 22 callbacks suppressed
	[Sep19 19:36] kauditd_printk_skb: 87 callbacks suppressed
	[  +9.057023] kauditd_printk_skb: 1 callbacks suppressed
	[ +36.276079] kauditd_printk_skb: 6 callbacks suppressed
	[Sep19 19:43] systemd-fstab-generator[6318]: Ignoring "noauto" option for root device
	[  +0.157053] systemd-fstab-generator[6330]: Ignoring "noauto" option for root device
	[  +0.183090] systemd-fstab-generator[6344]: Ignoring "noauto" option for root device
	[  +0.156712] systemd-fstab-generator[6356]: Ignoring "noauto" option for root device
	[  +0.298764] systemd-fstab-generator[6384]: Ignoring "noauto" option for root device
	[Sep19 19:45] systemd-fstab-generator[6525]: Ignoring "noauto" option for root device
	[  +0.108524] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.221979] kauditd_printk_skb: 12 callbacks suppressed
	[ +18.157265] kauditd_printk_skb: 89 callbacks suppressed
	[Sep19 19:48] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [2810749ec6ddcf1f3f74240e6c9331cbb3fece4fdd30b0b5ec5e7454fddb95c5] <==
	{"level":"info","ts":"2024-09-19T19:41:52.468725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e [term 3] starts to transfer leadership to 9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:41:52.468779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e sends MsgTimeoutNow to 9598478c709eb7 immediately as 9598478c709eb7 already has up-to-date log"}
	{"level":"info","ts":"2024-09-19T19:41:52.471484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e [term: 3] received a MsgVote message with higher term from 9598478c709eb7 [term: 4]"}
	{"level":"info","ts":"2024-09-19T19:41:52.471587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became follower at term 4"}
	{"level":"info","ts":"2024-09-19T19:41:52.471627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e [logterm: 3, index: 3555, vote: 0] cast MsgVote for 9598478c709eb7 [logterm: 3, index: 3555] at term 4"}
	{"level":"info","ts":"2024-09-19T19:41:52.471662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db356cbc19811e0e lost leader db356cbc19811e0e at term 4"}
	{"level":"info","ts":"2024-09-19T19:41:52.474587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db356cbc19811e0e elected leader 9598478c709eb7 at term 4"}
	{"level":"info","ts":"2024-09-19T19:41:52.569116Z","caller":"etcdserver/server.go:1498","msg":"leadership transfer finished","local-member-id":"db356cbc19811e0e","old-leader-member-id":"db356cbc19811e0e","new-leader-member-id":"9598478c709eb7","took":"100.452555ms"}
	{"level":"info","ts":"2024-09-19T19:41:52.569268Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9598478c709eb7"}
	{"level":"warn","ts":"2024-09-19T19:41:52.569661Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:41:52.569710Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9598478c709eb7"}
	{"level":"warn","ts":"2024-09-19T19:41:52.571916Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:41:52.571963Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:41:52.572050Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7"}
	{"level":"warn","ts":"2024-09-19T19:41:52.572169Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","error":"context canceled"}
	{"level":"warn","ts":"2024-09-19T19:41:52.572220Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"9598478c709eb7","error":"failed to read 9598478c709eb7 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-19T19:41:52.572265Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7"}
	{"level":"warn","ts":"2024-09-19T19:41:52.572401Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7","error":"context canceled"}
	{"level":"info","ts":"2024-09-19T19:41:52.572446Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"db356cbc19811e0e","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:41:52.572465Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9598478c709eb7"}
	{"level":"info","ts":"2024-09-19T19:41:52.578783Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.173:2380"}
	{"level":"warn","ts":"2024-09-19T19:41:52.579147Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.232:50370","server-name":"","error":"read tcp 192.168.39.173:2380->192.168.39.232:50370: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T19:41:52.579255Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.232:50376","server-name":"","error":"read tcp 192.168.39.173:2380->192.168.39.232:50376: use of closed network connection"}
	{"level":"info","ts":"2024-09-19T19:41:53.578898Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.173:2380"}
	{"level":"info","ts":"2024-09-19T19:41:53.578965Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-076992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.173:2380"],"advertise-client-urls":["https://192.168.39.173:2379"]}
	
	
	==> etcd [9b749431ff13b8dd2ed0c76d40237fcb4abc0d835ac213338ceb27e1a3e37063] <==
	{"level":"warn","ts":"2024-09-19T19:46:40.837558Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9598478c709eb7","rtt":"0s","error":"dial tcp 192.168.39.232:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-19T19:46:40.868401Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-19T19:46:40.868617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.001011577s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-19T19:46:40.868681Z","caller":"traceutil/trace.go:171","msg":"trace[118367455] range","detail":"{range_begin:; range_end:; }","duration":"7.001134575s","start":"2024-09-19T19:46:33.867530Z","end":"2024-09-19T19:46:40.868665Z","steps":["trace[118367455] 'agreement among raft nodes before linearized reading'  (duration: 7.001009297s)"],"step_count":1}
	{"level":"error","ts":"2024-09-19T19:46:40.868744Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-19T19:46:41.635952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e [logterm: 4, index: 3556, vote: 9598478c709eb7] cast MsgPreVote for 9598478c709eb7 [logterm: 4, index: 3557] at term 4"}
	{"level":"info","ts":"2024-09-19T19:46:41.641684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e [term: 4] received a MsgVote message with higher term from 9598478c709eb7 [term: 5]"}
	{"level":"info","ts":"2024-09-19T19:46:41.641741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became follower at term 5"}
	{"level":"info","ts":"2024-09-19T19:46:41.641752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e [logterm: 4, index: 3556, vote: 0] cast MsgVote for 9598478c709eb7 [logterm: 4, index: 3557] at term 5"}
	{"level":"info","ts":"2024-09-19T19:46:41.644341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db356cbc19811e0e elected leader 9598478c709eb7 at term 5"}
	{"level":"info","ts":"2024-09-19T19:46:41.648270Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"db356cbc19811e0e","local-member-attributes":"{Name:ha-076992 ClientURLs:[https://192.168.39.173:2379]}","request-path":"/0/members/db356cbc19811e0e/attributes","cluster-id":"a25ac6d8ed10a2a9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T19:46:41.648311Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:46:41.648659Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:46:41.650154Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:46:41.651221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.173:2379"}
	{"level":"info","ts":"2024-09-19T19:46:41.651362Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T19:46:41.651413Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T19:46:41.651974Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:46:41.652708Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-19T19:46:41.655778Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-09-19T19:46:41.658241Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58798","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:58798: write: broken pipe"}
	{"level":"warn","ts":"2024-09-19T19:46:41.660557Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58806","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:58806: write: broken pipe"}
	{"level":"warn","ts":"2024-09-19T19:46:41.662854Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58820","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:58820: write: broken pipe"}
	{"level":"warn","ts":"2024-09-19T19:46:41.665287Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-09-19T19:46:41.667753Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41934","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:54:51 up 29 min,  0 users,  load average: 0.06, 0.32, 0.34
	Linux ha-076992 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e386f72e5d3798428f3219e92ee2f99216db6834829a9df02901f3fad8c6df3] <==
	I0919 19:41:10.788584       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:41:20.788207       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:41:20.788306       1 main.go:299] handling current node
	I0919 19:41:20.788334       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:41:20.788351       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:41:20.788482       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:41:20.788502       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:41:30.789295       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:41:30.789541       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:41:30.789801       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:41:30.789862       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:41:30.790055       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:41:30.790082       1 main.go:299] handling current node
	I0919 19:41:40.788247       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:41:40.788344       1 main.go:299] handling current node
	I0919 19:41:40.788358       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:41:40.788363       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:41:40.788527       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:41:40.788551       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:41:50.788832       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:41:50.788956       1 main.go:299] handling current node
	I0919 19:41:50.789030       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:41:50.789053       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:41:50.789187       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:41:50.789213       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e0f34c5e0c76fb670dbfe8fd1cab537fee4affae7a5ff1dd5acf436ba3cb668a] <==
	I0919 19:54:10.797923       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:54:20.802898       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:54:20.803066       1 main.go:299] handling current node
	I0919 19:54:20.803111       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:54:20.803121       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:54:20.803280       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:54:20.803307       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:54:30.797502       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:54:30.797648       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:54:30.797918       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:54:30.797945       1 main.go:299] handling current node
	I0919 19:54:30.798037       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:54:30.798051       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:54:40.805211       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:54:40.805289       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:54:40.805459       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:54:40.805484       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	I0919 19:54:40.805565       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:54:40.805587       1 main.go:299] handling current node
	I0919 19:54:50.803483       1 main.go:295] Handling node with IPs: map[192.168.39.173:{}]
	I0919 19:54:50.803692       1 main.go:299] handling current node
	I0919 19:54:50.803716       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0919 19:54:50.803726       1 main.go:322] Node ha-076992-m02 has CIDR [10.244.1.0/24] 
	I0919 19:54:50.803868       1 main.go:295] Handling node with IPs: map[192.168.39.157:{}]
	I0919 19:54:50.803874       1 main.go:322] Node ha-076992-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [71f3081db7aeeaa28cb8a2b7919f12fa93918b39164de4dd2d3443c379d6b87d] <==
	I0919 19:46:15.563250       1 options.go:228] external host was not specified, using 192.168.39.173
	I0919 19:46:15.565721       1 server.go:142] Version: v1.31.1
	I0919 19:46:15.565781       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:46:15.860939       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0919 19:46:15.874653       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 19:46:15.878806       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0919 19:46:15.878908       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0919 19:46:15.879216       1 instance.go:232] Using reconciler: lease
	W0919 19:46:35.861758       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0919 19:46:35.861757       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0919 19:46:35.881045       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W0919 19:46:35.881119       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-apiserver [ebc8f047f0e800cd74c4ee6c30beb3fa49f8e36b8654b5367fd95246a2c5d6f8] <==
	I0919 19:47:58.255198       1 establishing_controller.go:81] Starting EstablishingController
	I0919 19:47:58.255239       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0919 19:47:58.255271       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0919 19:47:58.255311       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0919 19:47:58.361166       1 shared_informer.go:320] Caches are synced for configmaps
	I0919 19:47:58.361411       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0919 19:47:58.361449       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0919 19:47:58.361946       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0919 19:47:58.363325       1 aggregator.go:171] initial CRD sync complete...
	I0919 19:47:58.363545       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 19:47:58.363920       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 19:47:58.364098       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0919 19:47:58.376345       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0919 19:47:58.389473       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 19:47:58.389585       1 policy_source.go:224] refreshing policies
	I0919 19:47:58.451138       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 19:47:58.451312       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0919 19:47:58.453620       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 19:47:58.460145       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0919 19:47:58.465072       1 cache.go:39] Caches are synced for autoregister controller
	I0919 19:47:58.467484       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 19:47:59.265772       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 19:47:59.805608       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.173 192.168.39.232]
	I0919 19:47:59.810904       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 19:47:59.829140       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [569cc465916e3a55595915b50f72cb99c942e304f1d9fafb98d4dd24f90f6e15] <==
	I0919 19:48:01.906105       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-076992-m02"
	I0919 19:48:01.906151       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-076992-m04"
	I0919 19:48:01.906198       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 19:48:01.908923       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0919 19:48:01.950230       1 shared_informer.go:320] Caches are synced for daemon sets
	I0919 19:48:01.987597       1 shared_informer.go:320] Caches are synced for stateful set
	I0919 19:48:01.996035       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 19:48:01.998926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:48:02.037575       1 shared_informer.go:320] Caches are synced for endpoint
	I0919 19:48:02.070916       1 shared_informer.go:320] Caches are synced for attach detach
	I0919 19:48:02.089296       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0919 19:48:02.090429       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 19:48:02.526463       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 19:48:02.588551       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 19:48:02.588593       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 19:51:50.237278       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992"
	I0919 19:51:52.012829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m02"
	I0919 19:52:58.283048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076992-m04"
	I0919 19:52:58.283558       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:52:58.307615       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:52:59.110079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="2.781284ms"
	I0919 19:53:01.651960       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	I0919 19:53:05.803185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.430858ms"
	I0919 19:53:05.803285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.317µs"
	I0919 19:53:28.760663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-076992-m04"
	
	
	==> kube-controller-manager [ae6348fde1f6d938c28d9560b2606d67c26d75abb8097e420ee3d798d47865d0] <==
	I0919 19:46:14.821951       1 serving.go:386] Generated self-signed cert in-memory
	I0919 19:46:15.178382       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0919 19:46:15.178436       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:46:15.180353       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 19:46:15.180605       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 19:46:15.181162       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0919 19:46:15.181252       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0919 19:46:36.886142       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.173:8443/healthz\": dial tcp 192.168.39.173:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.173:34628->192.168.39.173:8443: read: connection reset by peer"
	
	
	==> kube-proxy [32aead7e6d36aa4cd159c437bf90339360482f9c9985298380fb3396ca7b6303] <==
	E0919 19:46:07.124070       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0919 19:46:07.124493       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:46:07.125041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:46:07.124742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:46:07.125335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:46:07.124847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:46:07.125445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:46:16.341334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:46:16.341455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0919 19:46:19.411516       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0919 19:46:19.411731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:46:19.411819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:46:22.483736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:46:22.483822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0919 19:46:31.700835       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0919 19:46:34.771847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:46:34.772162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:46:40.915915       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:46:40.916247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-076992&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0919 19:46:43.988420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0919 19:46:43.988866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0919 19:46:43.988927       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0919 19:47:09.507821       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 19:47:17.908059       1 shared_informer.go:320] Caches are synced for service config
	I0919 19:47:26.409072       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c412d5b70d043ee964d23432b66f90d26bb2be3b9d0a4f584434b02697eb5730] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 19:35:51.955971       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:35:55.029038       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:35:58.100556       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:36:04.246747       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0919 19:36:16.531671       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076992\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0919 19:36:33.434609       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.173"]
	E0919 19:36:33.442335       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:36:33.526674       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 19:36:33.527103       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 19:36:33.527381       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:36:33.533680       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:36:33.534387       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:36:33.534496       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:36:33.538133       1 config.go:199] "Starting service config controller"
	I0919 19:36:33.538362       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:36:33.541156       1 config.go:328] "Starting node config controller"
	I0919 19:36:33.543065       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:36:33.540804       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:36:33.547059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:36:33.653079       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 19:36:33.653127       1 shared_informer.go:320] Caches are synced for service config
	I0919 19:36:33.653246       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9c24c13be07e64dda2e80724703bc9b29ba428e216b991dd494bf886bf5e58e7] <==
	W0919 19:47:16.578483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.173:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:16.578597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.173:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:24.130555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.173:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:24.130640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.173:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:26.535643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.173:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:26.535717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.173:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:27.615946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.173:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:27.616140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.173:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:28.167719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.173:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:28.167819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.173:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:31.455879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.173:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:31.455953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.173:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:34.170335       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.173:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:34.170457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.173:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:37.601695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.173:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:37.601781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.173:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:38.773202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.173:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:38.773287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.173:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:43.708104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.173:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:43.708281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.173:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:49.302255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.173:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:49.302465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.173:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:47:53.367851       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.173:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:47:53.367963       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.173:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	I0919 19:48:38.193355       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cfb4ace0f3e597ba737236f8b2d73821f37c3b98501414f97261fabca9f4cb79] <==
	E0919 19:36:28.949299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.173:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.146329       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.173:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.146436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.173:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.196546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.173:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.196612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.173:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.396468       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.173:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.396513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.173:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:29.925771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.173:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:29.926086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.173:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:30.435838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.173:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:30.436058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.173:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:32.617798       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.173:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:32.617869       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.173:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:33.195606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.173:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.173:8443: connect: connection refused
	E0919 19:36:33.195731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.173:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.173:8443: connect: connection refused" logger="UnhandledError"
	W0919 19:36:35.776364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 19:36:35.776452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 19:36:48.923565       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 19:39:12.713055       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wdj7x\": pod busybox-7dff88458-wdj7x is already assigned to node \"ha-076992-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wdj7x" node="ha-076992-m04"
	E0919 19:39:12.713392       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 25081f3e-a225-4436-852b-4fe81857e092(default/busybox-7dff88458-wdj7x) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wdj7x"
	E0919 19:39:12.713493       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wdj7x\": pod busybox-7dff88458-wdj7x is already assigned to node \"ha-076992-m04\"" pod="default/busybox-7dff88458-wdj7x"
	I0919 19:39:12.713626       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wdj7x" node="ha-076992-m04"
	I0919 19:41:52.383876       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0919 19:41:52.384052       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0919 19:41:52.388440       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 19 19:54:13 ha-076992 kubelet[1304]: E0919 19:54:13.393071    1304 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\" already exists"
	Sep 19 19:54:13 ha-076992 kubelet[1304]: E0919 19:54:13.393376    1304 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\" already exists" pod="kube-system/coredns-7c65d6cfc9-nbds4"
	Sep 19 19:54:13 ha-076992 kubelet[1304]: E0919 19:54:13.393446    1304 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\" already exists" pod="kube-system/coredns-7c65d6cfc9-nbds4"
	Sep 19 19:54:13 ha-076992 kubelet[1304]: E0919 19:54:13.393524    1304 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nbds4_kube-system(89ceb0f8-a15c-405e-b0ed-d54a8bfe332f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nbds4_kube-system(89ceb0f8-a15c-405e-b0ed-d54a8bfe332f)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\\\" already exists\"" pod="kube-system/coredns-7c65d6cfc9-nbds4" podUID="89ceb0f8-a15c-405e-b0ed-d54a8bfe332f"
	Sep 19 19:54:21 ha-076992 kubelet[1304]: E0919 19:54:21.868656    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775661867788215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:54:21 ha-076992 kubelet[1304]: E0919 19:54:21.868714    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775661867788215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:54:25 ha-076992 kubelet[1304]: E0919 19:54:25.396581    1304 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\" already exists"
	Sep 19 19:54:25 ha-076992 kubelet[1304]: E0919 19:54:25.396911    1304 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\" already exists" pod="kube-system/coredns-7c65d6cfc9-nbds4"
	Sep 19 19:54:25 ha-076992 kubelet[1304]: E0919 19:54:25.396964    1304 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\" already exists" pod="kube-system/coredns-7c65d6cfc9-nbds4"
	Sep 19 19:54:25 ha-076992 kubelet[1304]: E0919 19:54:25.397114    1304 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nbds4_kube-system(89ceb0f8-a15c-405e-b0ed-d54a8bfe332f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nbds4_kube-system(89ceb0f8-a15c-405e-b0ed-d54a8bfe332f)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\\\" already exists\"" pod="kube-system/coredns-7c65d6cfc9-nbds4" podUID="89ceb0f8-a15c-405e-b0ed-d54a8bfe332f"
	Sep 19 19:54:31 ha-076992 kubelet[1304]: E0919 19:54:31.405833    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 19 19:54:31 ha-076992 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 19:54:31 ha-076992 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 19:54:31 ha-076992 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 19:54:31 ha-076992 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 19:54:31 ha-076992 kubelet[1304]: E0919 19:54:31.871240    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775671870791386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:54:31 ha-076992 kubelet[1304]: E0919 19:54:31.871303    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775671870791386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:54:40 ha-076992 kubelet[1304]: E0919 19:54:40.391730    1304 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\" already exists"
	Sep 19 19:54:40 ha-076992 kubelet[1304]: E0919 19:54:40.392077    1304 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\" already exists" pod="kube-system/coredns-7c65d6cfc9-nbds4"
	Sep 19 19:54:40 ha-076992 kubelet[1304]: E0919 19:54:40.392131    1304 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\" already exists" pod="kube-system/coredns-7c65d6cfc9-nbds4"
	Sep 19 19:54:40 ha-076992 kubelet[1304]: E0919 19:54:40.392218    1304 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nbds4_kube-system(89ceb0f8-a15c-405e-b0ed-d54a8bfe332f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nbds4_kube-system(89ceb0f8-a15c-405e-b0ed-d54a8bfe332f)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7c65d6cfc9-nbds4_kube-system_89ceb0f8-a15c-405e-b0ed-d54a8bfe332f_2\\\" already exists\"" pod="kube-system/coredns-7c65d6cfc9-nbds4" podUID="89ceb0f8-a15c-405e-b0ed-d54a8bfe332f"
	Sep 19 19:54:41 ha-076992 kubelet[1304]: E0919 19:54:41.873515    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775681873092260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:54:41 ha-076992 kubelet[1304]: E0919 19:54:41.873798    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775681873092260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:54:51 ha-076992 kubelet[1304]: E0919 19:54:51.876959    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775691876598122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:54:51 ha-076992 kubelet[1304]: E0919 19:54:51.877037    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726775691876598122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 19:54:50.846270   41359 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19664-7917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076992 -n ha-076992
helpers_test.go:261: (dbg) Run:  kubectl --context ha-076992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (781.83s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (325.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-282812
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-282812
E0919 20:03:59.337739   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-282812: exit status 82 (2m1.873957769s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-282812-m03"  ...
	* Stopping node "multinode-282812-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-282812" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-282812 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-282812 --wait=true -v=8 --alsologtostderr: (3m21.774739784s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-282812
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-282812 -n multinode-282812
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-282812 logs -n 25: (1.480663176s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m02:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile472680244/001/cp-test_multinode-282812-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m02:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812:/home/docker/cp-test_multinode-282812-m02_multinode-282812.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n multinode-282812 sudo cat                                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /home/docker/cp-test_multinode-282812-m02_multinode-282812.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m02:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03:/home/docker/cp-test_multinode-282812-m02_multinode-282812-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n multinode-282812-m03 sudo cat                                   | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /home/docker/cp-test_multinode-282812-m02_multinode-282812-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp testdata/cp-test.txt                                                | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m03:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile472680244/001/cp-test_multinode-282812-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m03:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812:/home/docker/cp-test_multinode-282812-m03_multinode-282812.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n multinode-282812 sudo cat                                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /home/docker/cp-test_multinode-282812-m03_multinode-282812.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m03:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m02:/home/docker/cp-test_multinode-282812-m03_multinode-282812-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n multinode-282812-m02 sudo cat                                   | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /home/docker/cp-test_multinode-282812-m03_multinode-282812-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-282812 node stop m03                                                          | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	| node    | multinode-282812 node start                                                             | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:03 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-282812                                                                | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:03 UTC |                     |
	| stop    | -p multinode-282812                                                                     | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:03 UTC |                     |
	| start   | -p multinode-282812                                                                     | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:05 UTC | 19 Sep 24 20:08 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-282812                                                                | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:08 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 20:05:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 20:05:04.933210   48464 out.go:345] Setting OutFile to fd 1 ...
	I0919 20:05:04.933427   48464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:05:04.933465   48464 out.go:358] Setting ErrFile to fd 2...
	I0919 20:05:04.933481   48464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:05:04.934115   48464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 20:05:04.934706   48464 out.go:352] Setting JSON to false
	I0919 20:05:04.935691   48464 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6449,"bootTime":1726769856,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 20:05:04.935801   48464 start.go:139] virtualization: kvm guest
	I0919 20:05:04.938508   48464 out.go:177] * [multinode-282812] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 20:05:04.940089   48464 notify.go:220] Checking for updates...
	I0919 20:05:04.940140   48464 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 20:05:04.941790   48464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 20:05:04.943316   48464 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 20:05:04.944872   48464 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 20:05:04.946297   48464 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 20:05:04.947713   48464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 20:05:04.949573   48464 config.go:182] Loaded profile config "multinode-282812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 20:05:04.949666   48464 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 20:05:04.950132   48464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:05:04.950191   48464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:05:04.965413   48464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I0919 20:05:04.965928   48464 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:05:04.966450   48464 main.go:141] libmachine: Using API Version  1
	I0919 20:05:04.966470   48464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:05:04.966772   48464 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:05:04.966942   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:05:05.003385   48464 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 20:05:05.004923   48464 start.go:297] selected driver: kvm2
	I0919 20:05:05.004934   48464 start.go:901] validating driver "kvm2" against &{Name:multinode-282812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-282812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:05:05.005105   48464 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 20:05:05.005427   48464 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 20:05:05.005487   48464 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 20:05:05.020327   48464 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 20:05:05.020986   48464 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 20:05:05.021019   48464 cni.go:84] Creating CNI manager for ""
	I0919 20:05:05.021105   48464 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 20:05:05.021242   48464 start.go:340] cluster config:
	{Name:multinode-282812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-282812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:05:05.021392   48464 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 20:05:05.024202   48464 out.go:177] * Starting "multinode-282812" primary control-plane node in "multinode-282812" cluster
	I0919 20:05:05.025617   48464 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 20:05:05.025671   48464 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 20:05:05.025679   48464 cache.go:56] Caching tarball of preloaded images
	I0919 20:05:05.025789   48464 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 20:05:05.025804   48464 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 20:05:05.025915   48464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/config.json ...
	I0919 20:05:05.026145   48464 start.go:360] acquireMachinesLock for multinode-282812: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 20:05:05.026224   48464 start.go:364] duration metric: took 59.676µs to acquireMachinesLock for "multinode-282812"
	I0919 20:05:05.026243   48464 start.go:96] Skipping create...Using existing machine configuration
	I0919 20:05:05.026250   48464 fix.go:54] fixHost starting: 
	I0919 20:05:05.026544   48464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:05:05.026584   48464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:05:05.040914   48464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0919 20:05:05.041405   48464 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:05:05.041890   48464 main.go:141] libmachine: Using API Version  1
	I0919 20:05:05.041923   48464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:05:05.042254   48464 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:05:05.042440   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:05:05.042600   48464 main.go:141] libmachine: (multinode-282812) Calling .GetState
	I0919 20:05:05.044126   48464 fix.go:112] recreateIfNeeded on multinode-282812: state=Running err=<nil>
	W0919 20:05:05.044157   48464 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 20:05:05.046083   48464 out.go:177] * Updating the running kvm2 "multinode-282812" VM ...
	I0919 20:05:05.047491   48464 machine.go:93] provisionDockerMachine start ...
	I0919 20:05:05.047508   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:05:05.047685   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.050193   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.050606   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.050658   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.050737   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:05:05.050896   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.051042   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.051169   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:05:05.051310   48464 main.go:141] libmachine: Using SSH client type: native
	I0919 20:05:05.051497   48464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0919 20:05:05.051509   48464 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 20:05:05.158128   48464 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-282812
	
	I0919 20:05:05.158159   48464 main.go:141] libmachine: (multinode-282812) Calling .GetMachineName
	I0919 20:05:05.158439   48464 buildroot.go:166] provisioning hostname "multinode-282812"
	I0919 20:05:05.158477   48464 main.go:141] libmachine: (multinode-282812) Calling .GetMachineName
	I0919 20:05:05.158646   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.161331   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.161681   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.161716   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.161853   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:05:05.162023   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.162174   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.162308   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:05:05.162452   48464 main.go:141] libmachine: Using SSH client type: native
	I0919 20:05:05.162674   48464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0919 20:05:05.162688   48464 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-282812 && echo "multinode-282812" | sudo tee /etc/hostname
	I0919 20:05:05.289376   48464 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-282812
	
	I0919 20:05:05.289403   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.291891   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.292241   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.292266   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.292441   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:05:05.292607   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.292762   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.292882   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:05:05.293025   48464 main.go:141] libmachine: Using SSH client type: native
	I0919 20:05:05.293197   48464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0919 20:05:05.293214   48464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-282812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-282812/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-282812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 20:05:05.398035   48464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 20:05:05.398063   48464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 20:05:05.398098   48464 buildroot.go:174] setting up certificates
	I0919 20:05:05.398110   48464 provision.go:84] configureAuth start
	I0919 20:05:05.398121   48464 main.go:141] libmachine: (multinode-282812) Calling .GetMachineName
	I0919 20:05:05.398364   48464 main.go:141] libmachine: (multinode-282812) Calling .GetIP
	I0919 20:05:05.400918   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.401303   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.401339   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.401490   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.403663   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.404000   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.404032   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.404175   48464 provision.go:143] copyHostCerts
	I0919 20:05:05.404211   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 20:05:05.404250   48464 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 20:05:05.404260   48464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 20:05:05.404347   48464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 20:05:05.404439   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 20:05:05.404462   48464 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 20:05:05.404471   48464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 20:05:05.404511   48464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 20:05:05.404610   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 20:05:05.404633   48464 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 20:05:05.404643   48464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 20:05:05.404675   48464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 20:05:05.404735   48464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.multinode-282812 san=[127.0.0.1 192.168.39.87 localhost minikube multinode-282812]
	I0919 20:05:05.624537   48464 provision.go:177] copyRemoteCerts
	I0919 20:05:05.624599   48464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 20:05:05.624624   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.627185   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.627633   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.627657   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.627771   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:05:05.627965   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.628111   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:05:05.628266   48464 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/multinode-282812/id_rsa Username:docker}
	I0919 20:05:05.712939   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 20:05:05.713015   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 20:05:05.737711   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 20:05:05.737792   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0919 20:05:05.765191   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 20:05:05.765271   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 20:05:05.792462   48464 provision.go:87] duration metric: took 394.339291ms to configureAuth
	I0919 20:05:05.792505   48464 buildroot.go:189] setting minikube options for container-runtime
	I0919 20:05:05.792741   48464 config.go:182] Loaded profile config "multinode-282812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 20:05:05.792822   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.795515   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.795847   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.795900   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.796064   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:05:05.796241   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.796381   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.796492   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:05:05.796616   48464 main.go:141] libmachine: Using SSH client type: native
	I0919 20:05:05.796769   48464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0919 20:05:05.796781   48464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 20:06:36.442745   48464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 20:06:36.442769   48464 machine.go:96] duration metric: took 1m31.395267139s to provisionDockerMachine
	I0919 20:06:36.442782   48464 start.go:293] postStartSetup for "multinode-282812" (driver="kvm2")
	I0919 20:06:36.442794   48464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 20:06:36.442810   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:06:36.443118   48464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 20:06:36.443155   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:06:36.446327   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.446806   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:36.446836   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.447014   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:06:36.447197   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:06:36.447340   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:06:36.447454   48464 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/multinode-282812/id_rsa Username:docker}
	I0919 20:06:36.536460   48464 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 20:06:36.541277   48464 command_runner.go:130] > NAME=Buildroot
	I0919 20:06:36.541302   48464 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0919 20:06:36.541308   48464 command_runner.go:130] > ID=buildroot
	I0919 20:06:36.541315   48464 command_runner.go:130] > VERSION_ID=2023.02.9
	I0919 20:06:36.541323   48464 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0919 20:06:36.541370   48464 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 20:06:36.541388   48464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 20:06:36.541452   48464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 20:06:36.541536   48464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 20:06:36.541549   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 20:06:36.541654   48464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 20:06:36.551091   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 20:06:36.575189   48464 start.go:296] duration metric: took 132.393548ms for postStartSetup
	I0919 20:06:36.575231   48464 fix.go:56] duration metric: took 1m31.548980366s for fixHost
	I0919 20:06:36.575255   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:06:36.578159   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.578637   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:36.578662   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.578801   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:06:36.579038   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:06:36.579198   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:06:36.579293   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:06:36.579419   48464 main.go:141] libmachine: Using SSH client type: native
	I0919 20:06:36.579629   48464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0919 20:06:36.579644   48464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 20:06:36.682025   48464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726776396.653021410
	
	I0919 20:06:36.682048   48464 fix.go:216] guest clock: 1726776396.653021410
	I0919 20:06:36.682055   48464 fix.go:229] Guest: 2024-09-19 20:06:36.65302141 +0000 UTC Remote: 2024-09-19 20:06:36.575235071 +0000 UTC m=+91.675920701 (delta=77.786339ms)
	I0919 20:06:36.682074   48464 fix.go:200] guest clock delta is within tolerance: 77.786339ms
	I0919 20:06:36.682080   48464 start.go:83] releasing machines lock for "multinode-282812", held for 1m31.655843579s
	I0919 20:06:36.682102   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:06:36.682357   48464 main.go:141] libmachine: (multinode-282812) Calling .GetIP
	I0919 20:06:36.685220   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.685559   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:36.685581   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.685823   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:06:36.686318   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:06:36.686462   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:06:36.686560   48464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 20:06:36.686608   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:06:36.686665   48464 ssh_runner.go:195] Run: cat /version.json
	I0919 20:06:36.686699   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:06:36.689288   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.689610   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:36.689646   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.689682   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.689788   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:06:36.689977   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:06:36.690028   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:36.690051   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.690128   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:06:36.690209   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:06:36.690264   48464 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/multinode-282812/id_rsa Username:docker}
	I0919 20:06:36.690327   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:06:36.690445   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:06:36.690576   48464 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/multinode-282812/id_rsa Username:docker}
	I0919 20:06:36.766617   48464 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0919 20:06:36.766750   48464 ssh_runner.go:195] Run: systemctl --version
	I0919 20:06:36.791866   48464 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0919 20:06:36.791926   48464 command_runner.go:130] > systemd 252 (252)
	I0919 20:06:36.791946   48464 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0919 20:06:36.791998   48464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 20:06:36.950306   48464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 20:06:36.957247   48464 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0919 20:06:36.957345   48464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 20:06:36.957422   48464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 20:06:36.968377   48464 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 20:06:36.968403   48464 start.go:495] detecting cgroup driver to use...
	I0919 20:06:36.968462   48464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 20:06:36.985251   48464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 20:06:36.999791   48464 docker.go:217] disabling cri-docker service (if available) ...
	I0919 20:06:36.999840   48464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 20:06:37.014146   48464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 20:06:37.028337   48464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 20:06:37.167281   48464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 20:06:37.304440   48464 docker.go:233] disabling docker service ...
	I0919 20:06:37.304501   48464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 20:06:37.321183   48464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 20:06:37.335315   48464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 20:06:37.474457   48464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 20:06:37.613514   48464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 20:06:37.627754   48464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 20:06:37.646823   48464 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0919 20:06:37.647388   48464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 20:06:37.647464   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.658169   48464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 20:06:37.658232   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.668479   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.679361   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.689764   48464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 20:06:37.700568   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.711108   48464 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.722868   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.733532   48464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 20:06:37.743352   48464 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0919 20:06:37.743441   48464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 20:06:37.753726   48464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:06:37.892610   48464 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 20:06:38.098090   48464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 20:06:38.098186   48464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 20:06:38.104256   48464 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0919 20:06:38.104283   48464 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0919 20:06:38.104303   48464 command_runner.go:130] > Device: 0,22	Inode: 1313        Links: 1
	I0919 20:06:38.104313   48464 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 20:06:38.104321   48464 command_runner.go:130] > Access: 2024-09-19 20:06:38.034913323 +0000
	I0919 20:06:38.104329   48464 command_runner.go:130] > Modify: 2024-09-19 20:06:37.955911284 +0000
	I0919 20:06:38.104334   48464 command_runner.go:130] > Change: 2024-09-19 20:06:37.955911284 +0000
	I0919 20:06:38.104360   48464 command_runner.go:130] >  Birth: -
	I0919 20:06:38.104393   48464 start.go:563] Will wait 60s for crictl version
	I0919 20:06:38.104431   48464 ssh_runner.go:195] Run: which crictl
	I0919 20:06:38.108252   48464 command_runner.go:130] > /usr/bin/crictl
	I0919 20:06:38.108358   48464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 20:06:38.150443   48464 command_runner.go:130] > Version:  0.1.0
	I0919 20:06:38.150470   48464 command_runner.go:130] > RuntimeName:  cri-o
	I0919 20:06:38.150474   48464 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0919 20:06:38.150479   48464 command_runner.go:130] > RuntimeApiVersion:  v1
	I0919 20:06:38.150498   48464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 20:06:38.150563   48464 ssh_runner.go:195] Run: crio --version
	I0919 20:06:38.178865   48464 command_runner.go:130] > crio version 1.29.1
	I0919 20:06:38.178889   48464 command_runner.go:130] > Version:        1.29.1
	I0919 20:06:38.178895   48464 command_runner.go:130] > GitCommit:      unknown
	I0919 20:06:38.178899   48464 command_runner.go:130] > GitCommitDate:  unknown
	I0919 20:06:38.178903   48464 command_runner.go:130] > GitTreeState:   clean
	I0919 20:06:38.178908   48464 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0919 20:06:38.178912   48464 command_runner.go:130] > GoVersion:      go1.21.6
	I0919 20:06:38.178918   48464 command_runner.go:130] > Compiler:       gc
	I0919 20:06:38.178950   48464 command_runner.go:130] > Platform:       linux/amd64
	I0919 20:06:38.178957   48464 command_runner.go:130] > Linkmode:       dynamic
	I0919 20:06:38.178966   48464 command_runner.go:130] > BuildTags:      
	I0919 20:06:38.178970   48464 command_runner.go:130] >   containers_image_ostree_stub
	I0919 20:06:38.178974   48464 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0919 20:06:38.178978   48464 command_runner.go:130] >   btrfs_noversion
	I0919 20:06:38.178982   48464 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0919 20:06:38.178987   48464 command_runner.go:130] >   libdm_no_deferred_remove
	I0919 20:06:38.178991   48464 command_runner.go:130] >   seccomp
	I0919 20:06:38.178995   48464 command_runner.go:130] > LDFlags:          unknown
	I0919 20:06:38.178999   48464 command_runner.go:130] > SeccompEnabled:   true
	I0919 20:06:38.179004   48464 command_runner.go:130] > AppArmorEnabled:  false
	I0919 20:06:38.180289   48464 ssh_runner.go:195] Run: crio --version
	I0919 20:06:38.207944   48464 command_runner.go:130] > crio version 1.29.1
	I0919 20:06:38.207966   48464 command_runner.go:130] > Version:        1.29.1
	I0919 20:06:38.207972   48464 command_runner.go:130] > GitCommit:      unknown
	I0919 20:06:38.207976   48464 command_runner.go:130] > GitCommitDate:  unknown
	I0919 20:06:38.207979   48464 command_runner.go:130] > GitTreeState:   clean
	I0919 20:06:38.207985   48464 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0919 20:06:38.207989   48464 command_runner.go:130] > GoVersion:      go1.21.6
	I0919 20:06:38.207993   48464 command_runner.go:130] > Compiler:       gc
	I0919 20:06:38.207997   48464 command_runner.go:130] > Platform:       linux/amd64
	I0919 20:06:38.208001   48464 command_runner.go:130] > Linkmode:       dynamic
	I0919 20:06:38.208005   48464 command_runner.go:130] > BuildTags:      
	I0919 20:06:38.208009   48464 command_runner.go:130] >   containers_image_ostree_stub
	I0919 20:06:38.208013   48464 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0919 20:06:38.208017   48464 command_runner.go:130] >   btrfs_noversion
	I0919 20:06:38.208021   48464 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0919 20:06:38.208025   48464 command_runner.go:130] >   libdm_no_deferred_remove
	I0919 20:06:38.208035   48464 command_runner.go:130] >   seccomp
	I0919 20:06:38.208039   48464 command_runner.go:130] > LDFlags:          unknown
	I0919 20:06:38.208043   48464 command_runner.go:130] > SeccompEnabled:   true
	I0919 20:06:38.208047   48464 command_runner.go:130] > AppArmorEnabled:  false
	I0919 20:06:38.211072   48464 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 20:06:38.212515   48464 main.go:141] libmachine: (multinode-282812) Calling .GetIP
	I0919 20:06:38.215101   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:38.215389   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:38.215415   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:38.215611   48464 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 20:06:38.220063   48464 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0919 20:06:38.220151   48464 kubeadm.go:883] updating cluster {Name:multinode-282812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-282812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 20:06:38.220385   48464 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 20:06:38.220472   48464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:06:38.260376   48464 command_runner.go:130] > {
	I0919 20:06:38.260404   48464 command_runner.go:130] >   "images": [
	I0919 20:06:38.260408   48464 command_runner.go:130] >     {
	I0919 20:06:38.260415   48464 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0919 20:06:38.260420   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.260426   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0919 20:06:38.260436   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260441   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.260452   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0919 20:06:38.260473   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0919 20:06:38.260480   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260487   48464 command_runner.go:130] >       "size": "87190579",
	I0919 20:06:38.260493   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.260497   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.260502   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.260509   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.260512   48464 command_runner.go:130] >     },
	I0919 20:06:38.260515   48464 command_runner.go:130] >     {
	I0919 20:06:38.260523   48464 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0919 20:06:38.260527   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.260534   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0919 20:06:38.260545   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260557   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.260568   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0919 20:06:38.260582   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0919 20:06:38.260589   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260599   48464 command_runner.go:130] >       "size": "1363676",
	I0919 20:06:38.260603   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.260612   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.260616   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.260621   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.260629   48464 command_runner.go:130] >     },
	I0919 20:06:38.260636   48464 command_runner.go:130] >     {
	I0919 20:06:38.260649   48464 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0919 20:06:38.260659   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.260669   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0919 20:06:38.260678   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260685   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.260697   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0919 20:06:38.260706   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0919 20:06:38.260714   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260721   48464 command_runner.go:130] >       "size": "31470524",
	I0919 20:06:38.260731   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.260738   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.260747   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.260753   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.260761   48464 command_runner.go:130] >     },
	I0919 20:06:38.260767   48464 command_runner.go:130] >     {
	I0919 20:06:38.260779   48464 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0919 20:06:38.260784   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.260789   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0919 20:06:38.260798   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260805   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.260819   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0919 20:06:38.260847   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0919 20:06:38.260865   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260871   48464 command_runner.go:130] >       "size": "63273227",
	I0919 20:06:38.260875   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.260883   48464 command_runner.go:130] >       "username": "nonroot",
	I0919 20:06:38.260890   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.260900   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.260909   48464 command_runner.go:130] >     },
	I0919 20:06:38.260914   48464 command_runner.go:130] >     {
	I0919 20:06:38.260927   48464 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0919 20:06:38.260936   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.260947   48464 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0919 20:06:38.260954   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260958   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.260970   48464 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0919 20:06:38.260984   48464 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0919 20:06:38.260993   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261001   48464 command_runner.go:130] >       "size": "149009664",
	I0919 20:06:38.261009   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.261018   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.261026   48464 command_runner.go:130] >       },
	I0919 20:06:38.261032   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261040   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261043   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.261048   48464 command_runner.go:130] >     },
	I0919 20:06:38.261056   48464 command_runner.go:130] >     {
	I0919 20:06:38.261075   48464 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0919 20:06:38.261085   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.261103   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0919 20:06:38.261112   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261122   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.261130   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0919 20:06:38.261144   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0919 20:06:38.261157   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261165   48464 command_runner.go:130] >       "size": "95237600",
	I0919 20:06:38.261174   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.261180   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.261188   48464 command_runner.go:130] >       },
	I0919 20:06:38.261195   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261204   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261210   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.261216   48464 command_runner.go:130] >     },
	I0919 20:06:38.261221   48464 command_runner.go:130] >     {
	I0919 20:06:38.261233   48464 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0919 20:06:38.261243   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.261253   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0919 20:06:38.261261   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261268   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.261282   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0919 20:06:38.261296   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0919 20:06:38.261302   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261307   48464 command_runner.go:130] >       "size": "89437508",
	I0919 20:06:38.261316   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.261323   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.261331   48464 command_runner.go:130] >       },
	I0919 20:06:38.261337   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261347   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261354   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.261363   48464 command_runner.go:130] >     },
	I0919 20:06:38.261368   48464 command_runner.go:130] >     {
	I0919 20:06:38.261380   48464 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0919 20:06:38.261388   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.261395   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0919 20:06:38.261404   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261412   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.261443   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0919 20:06:38.261466   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0919 20:06:38.261472   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261479   48464 command_runner.go:130] >       "size": "92733849",
	I0919 20:06:38.261489   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.261498   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261504   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261514   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.261520   48464 command_runner.go:130] >     },
	I0919 20:06:38.261525   48464 command_runner.go:130] >     {
	I0919 20:06:38.261535   48464 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0919 20:06:38.261541   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.261549   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0919 20:06:38.261552   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261556   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.261566   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0919 20:06:38.261578   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0919 20:06:38.261583   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261590   48464 command_runner.go:130] >       "size": "68420934",
	I0919 20:06:38.261596   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.261602   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.261607   48464 command_runner.go:130] >       },
	I0919 20:06:38.261614   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261620   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261625   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.261630   48464 command_runner.go:130] >     },
	I0919 20:06:38.261636   48464 command_runner.go:130] >     {
	I0919 20:06:38.261644   48464 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0919 20:06:38.261653   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.261660   48464 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0919 20:06:38.261669   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261676   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.261686   48464 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0919 20:06:38.261700   48464 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0919 20:06:38.261714   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261723   48464 command_runner.go:130] >       "size": "742080",
	I0919 20:06:38.261727   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.261732   48464 command_runner.go:130] >         "value": "65535"
	I0919 20:06:38.261736   48464 command_runner.go:130] >       },
	I0919 20:06:38.261746   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261753   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261763   48464 command_runner.go:130] >       "pinned": true
	I0919 20:06:38.261770   48464 command_runner.go:130] >     }
	I0919 20:06:38.261776   48464 command_runner.go:130] >   ]
	I0919 20:06:38.261784   48464 command_runner.go:130] > }
	I0919 20:06:38.262034   48464 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 20:06:38.262052   48464 crio.go:433] Images already preloaded, skipping extraction
	I0919 20:06:38.262128   48464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:06:38.295069   48464 command_runner.go:130] > {
	I0919 20:06:38.295098   48464 command_runner.go:130] >   "images": [
	I0919 20:06:38.295110   48464 command_runner.go:130] >     {
	I0919 20:06:38.295118   48464 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0919 20:06:38.295123   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295128   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0919 20:06:38.295132   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295136   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295144   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0919 20:06:38.295150   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0919 20:06:38.295154   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295161   48464 command_runner.go:130] >       "size": "87190579",
	I0919 20:06:38.295168   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.295174   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.295185   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295199   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295208   48464 command_runner.go:130] >     },
	I0919 20:06:38.295212   48464 command_runner.go:130] >     {
	I0919 20:06:38.295218   48464 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0919 20:06:38.295222   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295228   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0919 20:06:38.295231   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295236   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295244   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0919 20:06:38.295258   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0919 20:06:38.295268   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295273   48464 command_runner.go:130] >       "size": "1363676",
	I0919 20:06:38.295281   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.295293   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.295302   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295308   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295313   48464 command_runner.go:130] >     },
	I0919 20:06:38.295316   48464 command_runner.go:130] >     {
	I0919 20:06:38.295322   48464 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0919 20:06:38.295326   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295334   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0919 20:06:38.295343   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295349   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295364   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0919 20:06:38.295376   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0919 20:06:38.295385   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295394   48464 command_runner.go:130] >       "size": "31470524",
	I0919 20:06:38.295401   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.295407   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.295411   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295418   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295426   48464 command_runner.go:130] >     },
	I0919 20:06:38.295435   48464 command_runner.go:130] >     {
	I0919 20:06:38.295451   48464 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0919 20:06:38.295471   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295479   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0919 20:06:38.295487   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295494   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295504   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0919 20:06:38.295526   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0919 20:06:38.295535   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295544   48464 command_runner.go:130] >       "size": "63273227",
	I0919 20:06:38.295553   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.295564   48464 command_runner.go:130] >       "username": "nonroot",
	I0919 20:06:38.295573   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295579   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295583   48464 command_runner.go:130] >     },
	I0919 20:06:38.295591   48464 command_runner.go:130] >     {
	I0919 20:06:38.295602   48464 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0919 20:06:38.295611   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295619   48464 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0919 20:06:38.295628   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295634   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295648   48464 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0919 20:06:38.295665   48464 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0919 20:06:38.295672   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295679   48464 command_runner.go:130] >       "size": "149009664",
	I0919 20:06:38.295689   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.295696   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.295704   48464 command_runner.go:130] >       },
	I0919 20:06:38.295710   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.295719   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295725   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295733   48464 command_runner.go:130] >     },
	I0919 20:06:38.295738   48464 command_runner.go:130] >     {
	I0919 20:06:38.295748   48464 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0919 20:06:38.295763   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295775   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0919 20:06:38.295783   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295791   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295805   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0919 20:06:38.295819   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0919 20:06:38.295828   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295835   48464 command_runner.go:130] >       "size": "95237600",
	I0919 20:06:38.295844   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.295853   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.295861   48464 command_runner.go:130] >       },
	I0919 20:06:38.295871   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.295879   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295889   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295896   48464 command_runner.go:130] >     },
	I0919 20:06:38.295903   48464 command_runner.go:130] >     {
	I0919 20:06:38.295912   48464 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0919 20:06:38.295922   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295934   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0919 20:06:38.295942   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295951   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295969   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0919 20:06:38.295983   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0919 20:06:38.295995   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296005   48464 command_runner.go:130] >       "size": "89437508",
	I0919 20:06:38.296011   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.296021   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.296027   48464 command_runner.go:130] >       },
	I0919 20:06:38.296036   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.296042   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.296051   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.296056   48464 command_runner.go:130] >     },
	I0919 20:06:38.296062   48464 command_runner.go:130] >     {
	I0919 20:06:38.296082   48464 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0919 20:06:38.296092   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.296100   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0919 20:06:38.296108   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296115   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.296142   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0919 20:06:38.296156   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0919 20:06:38.296165   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296175   48464 command_runner.go:130] >       "size": "92733849",
	I0919 20:06:38.296183   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.296192   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.296199   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.296208   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.296216   48464 command_runner.go:130] >     },
	I0919 20:06:38.296221   48464 command_runner.go:130] >     {
	I0919 20:06:38.296227   48464 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0919 20:06:38.296235   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.296246   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0919 20:06:38.296255   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296261   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.296276   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0919 20:06:38.296289   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0919 20:06:38.296297   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296303   48464 command_runner.go:130] >       "size": "68420934",
	I0919 20:06:38.296309   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.296315   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.296323   48464 command_runner.go:130] >       },
	I0919 20:06:38.296332   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.296342   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.296348   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.296355   48464 command_runner.go:130] >     },
	I0919 20:06:38.296361   48464 command_runner.go:130] >     {
	I0919 20:06:38.296373   48464 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0919 20:06:38.296389   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.296396   48464 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0919 20:06:38.296400   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296409   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.296423   48464 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0919 20:06:38.296441   48464 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0919 20:06:38.296449   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296460   48464 command_runner.go:130] >       "size": "742080",
	I0919 20:06:38.296469   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.296477   48464 command_runner.go:130] >         "value": "65535"
	I0919 20:06:38.296482   48464 command_runner.go:130] >       },
	I0919 20:06:38.296489   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.296497   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.296507   48464 command_runner.go:130] >       "pinned": true
	I0919 20:06:38.296515   48464 command_runner.go:130] >     }
	I0919 20:06:38.296521   48464 command_runner.go:130] >   ]
	I0919 20:06:38.296529   48464 command_runner.go:130] > }
	I0919 20:06:38.296683   48464 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 20:06:38.296697   48464 cache_images.go:84] Images are preloaded, skipping loading
	I0919 20:06:38.296706   48464 kubeadm.go:934] updating node { 192.168.39.87 8443 v1.31.1 crio true true} ...
	I0919 20:06:38.296823   48464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-282812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-282812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 20:06:38.296900   48464 ssh_runner.go:195] Run: crio config
	I0919 20:06:38.338915   48464 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0919 20:06:38.338948   48464 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0919 20:06:38.338966   48464 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0919 20:06:38.338979   48464 command_runner.go:130] > #
	I0919 20:06:38.338991   48464 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0919 20:06:38.338997   48464 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0919 20:06:38.339006   48464 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0919 20:06:38.339015   48464 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0919 20:06:38.339020   48464 command_runner.go:130] > # reload'.
	I0919 20:06:38.339030   48464 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0919 20:06:38.339043   48464 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0919 20:06:38.339053   48464 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0919 20:06:38.339065   48464 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0919 20:06:38.339072   48464 command_runner.go:130] > [crio]
	I0919 20:06:38.339081   48464 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0919 20:06:38.339088   48464 command_runner.go:130] > # containers images, in this directory.
	I0919 20:06:38.339094   48464 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0919 20:06:38.339106   48464 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0919 20:06:38.339182   48464 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0919 20:06:38.339200   48464 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0919 20:06:38.339365   48464 command_runner.go:130] > # imagestore = ""
	I0919 20:06:38.339376   48464 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0919 20:06:38.339382   48464 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0919 20:06:38.339476   48464 command_runner.go:130] > storage_driver = "overlay"
	I0919 20:06:38.339490   48464 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0919 20:06:38.339499   48464 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0919 20:06:38.339506   48464 command_runner.go:130] > storage_option = [
	I0919 20:06:38.339643   48464 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0919 20:06:38.339667   48464 command_runner.go:130] > ]
	I0919 20:06:38.339678   48464 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0919 20:06:38.339691   48464 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0919 20:06:38.340017   48464 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0919 20:06:38.340033   48464 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0919 20:06:38.340043   48464 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0919 20:06:38.340051   48464 command_runner.go:130] > # always happen on a node reboot
	I0919 20:06:38.340298   48464 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0919 20:06:38.340326   48464 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0919 20:06:38.340336   48464 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0919 20:06:38.340347   48464 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0919 20:06:38.340491   48464 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0919 20:06:38.340506   48464 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0919 20:06:38.340519   48464 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0919 20:06:38.340875   48464 command_runner.go:130] > # internal_wipe = true
	I0919 20:06:38.340889   48464 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0919 20:06:38.340898   48464 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0919 20:06:38.341212   48464 command_runner.go:130] > # internal_repair = false
	I0919 20:06:38.341228   48464 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0919 20:06:38.341238   48464 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0919 20:06:38.341251   48464 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0919 20:06:38.341465   48464 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0919 20:06:38.341475   48464 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0919 20:06:38.341479   48464 command_runner.go:130] > [crio.api]
	I0919 20:06:38.341492   48464 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0919 20:06:38.341771   48464 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0919 20:06:38.341786   48464 command_runner.go:130] > # IP address on which the stream server will listen.
	I0919 20:06:38.342118   48464 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0919 20:06:38.342126   48464 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0919 20:06:38.342132   48464 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0919 20:06:38.342386   48464 command_runner.go:130] > # stream_port = "0"
	I0919 20:06:38.342394   48464 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0919 20:06:38.342636   48464 command_runner.go:130] > # stream_enable_tls = false
	I0919 20:06:38.342645   48464 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0919 20:06:38.342915   48464 command_runner.go:130] > # stream_idle_timeout = ""
	I0919 20:06:38.342924   48464 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0919 20:06:38.342930   48464 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0919 20:06:38.342933   48464 command_runner.go:130] > # minutes.
	I0919 20:06:38.343166   48464 command_runner.go:130] > # stream_tls_cert = ""
	I0919 20:06:38.343175   48464 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0919 20:06:38.343181   48464 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0919 20:06:38.343393   48464 command_runner.go:130] > # stream_tls_key = ""
	I0919 20:06:38.343402   48464 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0919 20:06:38.343408   48464 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0919 20:06:38.343425   48464 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0919 20:06:38.343629   48464 command_runner.go:130] > # stream_tls_ca = ""
	I0919 20:06:38.343640   48464 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0919 20:06:38.343847   48464 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0919 20:06:38.343857   48464 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0919 20:06:38.343982   48464 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0919 20:06:38.343991   48464 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0919 20:06:38.343996   48464 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0919 20:06:38.344000   48464 command_runner.go:130] > [crio.runtime]
	I0919 20:06:38.344005   48464 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0919 20:06:38.344013   48464 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0919 20:06:38.344017   48464 command_runner.go:130] > # "nofile=1024:2048"
	I0919 20:06:38.344024   48464 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0919 20:06:38.344187   48464 command_runner.go:130] > # default_ulimits = [
	I0919 20:06:38.344312   48464 command_runner.go:130] > # ]
	I0919 20:06:38.344320   48464 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0919 20:06:38.344598   48464 command_runner.go:130] > # no_pivot = false
	I0919 20:06:38.344607   48464 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0919 20:06:38.344613   48464 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0919 20:06:38.344913   48464 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0919 20:06:38.344928   48464 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0919 20:06:38.344933   48464 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0919 20:06:38.344939   48464 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 20:06:38.345044   48464 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0919 20:06:38.345057   48464 command_runner.go:130] > # Cgroup setting for conmon
	I0919 20:06:38.345075   48464 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0919 20:06:38.345268   48464 command_runner.go:130] > conmon_cgroup = "pod"
	I0919 20:06:38.345286   48464 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0919 20:06:38.345296   48464 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0919 20:06:38.345309   48464 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 20:06:38.345318   48464 command_runner.go:130] > conmon_env = [
	I0919 20:06:38.345393   48464 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0919 20:06:38.345482   48464 command_runner.go:130] > ]
	I0919 20:06:38.345495   48464 command_runner.go:130] > # Additional environment variables to set for all the
	I0919 20:06:38.345503   48464 command_runner.go:130] > # containers. These are overridden if set in the
	I0919 20:06:38.345515   48464 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0919 20:06:38.345614   48464 command_runner.go:130] > # default_env = [
	I0919 20:06:38.345863   48464 command_runner.go:130] > # ]
	I0919 20:06:38.345873   48464 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0919 20:06:38.345880   48464 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0919 20:06:38.346232   48464 command_runner.go:130] > # selinux = false
	I0919 20:06:38.346241   48464 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0919 20:06:38.346247   48464 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0919 20:06:38.346252   48464 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0919 20:06:38.346474   48464 command_runner.go:130] > # seccomp_profile = ""
	I0919 20:06:38.346483   48464 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0919 20:06:38.346489   48464 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0919 20:06:38.346498   48464 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0919 20:06:38.346505   48464 command_runner.go:130] > # which might increase security.
	I0919 20:06:38.346510   48464 command_runner.go:130] > # This option is currently deprecated,
	I0919 20:06:38.346518   48464 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0919 20:06:38.346611   48464 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0919 20:06:38.346619   48464 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0919 20:06:38.346625   48464 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0919 20:06:38.346631   48464 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0919 20:06:38.346637   48464 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0919 20:06:38.346644   48464 command_runner.go:130] > # This option supports live configuration reload.
	I0919 20:06:38.346986   48464 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0919 20:06:38.346994   48464 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0919 20:06:38.346998   48464 command_runner.go:130] > # the cgroup blockio controller.
	I0919 20:06:38.347261   48464 command_runner.go:130] > # blockio_config_file = ""
	I0919 20:06:38.347281   48464 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0919 20:06:38.347289   48464 command_runner.go:130] > # blockio parameters.
	I0919 20:06:38.347929   48464 command_runner.go:130] > # blockio_reload = false
	I0919 20:06:38.347944   48464 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0919 20:06:38.347948   48464 command_runner.go:130] > # irqbalance daemon.
	I0919 20:06:38.347953   48464 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0919 20:06:38.347959   48464 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0919 20:06:38.347965   48464 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0919 20:06:38.348048   48464 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0919 20:06:38.348184   48464 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0919 20:06:38.348199   48464 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0919 20:06:38.348208   48464 command_runner.go:130] > # This option supports live configuration reload.
	I0919 20:06:38.348215   48464 command_runner.go:130] > # rdt_config_file = ""
	I0919 20:06:38.348233   48464 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0919 20:06:38.348242   48464 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0919 20:06:38.348292   48464 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0919 20:06:38.348304   48464 command_runner.go:130] > # separate_pull_cgroup = ""
	I0919 20:06:38.348323   48464 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0919 20:06:38.348361   48464 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0919 20:06:38.348382   48464 command_runner.go:130] > # will be added.
	I0919 20:06:38.348390   48464 command_runner.go:130] > # default_capabilities = [
	I0919 20:06:38.348401   48464 command_runner.go:130] > # 	"CHOWN",
	I0919 20:06:38.348409   48464 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0919 20:06:38.348416   48464 command_runner.go:130] > # 	"FSETID",
	I0919 20:06:38.348422   48464 command_runner.go:130] > # 	"FOWNER",
	I0919 20:06:38.348434   48464 command_runner.go:130] > # 	"SETGID",
	I0919 20:06:38.348439   48464 command_runner.go:130] > # 	"SETUID",
	I0919 20:06:38.348445   48464 command_runner.go:130] > # 	"SETPCAP",
	I0919 20:06:38.348451   48464 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0919 20:06:38.348457   48464 command_runner.go:130] > # 	"KILL",
	I0919 20:06:38.348462   48464 command_runner.go:130] > # ]
	I0919 20:06:38.348479   48464 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0919 20:06:38.348489   48464 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0919 20:06:38.348498   48464 command_runner.go:130] > # add_inheritable_capabilities = false
	I0919 20:06:38.348513   48464 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0919 20:06:38.348523   48464 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 20:06:38.348529   48464 command_runner.go:130] > default_sysctls = [
	I0919 20:06:38.348547   48464 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0919 20:06:38.348553   48464 command_runner.go:130] > ]
	I0919 20:06:38.348560   48464 command_runner.go:130] > # List of devices on the host that a
	I0919 20:06:38.348571   48464 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0919 20:06:38.348578   48464 command_runner.go:130] > # allowed_devices = [
	I0919 20:06:38.348597   48464 command_runner.go:130] > # 	"/dev/fuse",
	I0919 20:06:38.348606   48464 command_runner.go:130] > # ]
	I0919 20:06:38.348614   48464 command_runner.go:130] > # List of additional devices. specified as
	I0919 20:06:38.348626   48464 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0919 20:06:38.348641   48464 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0919 20:06:38.348650   48464 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 20:06:38.348657   48464 command_runner.go:130] > # additional_devices = [
	I0919 20:06:38.348662   48464 command_runner.go:130] > # ]
	I0919 20:06:38.348676   48464 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0919 20:06:38.348682   48464 command_runner.go:130] > # cdi_spec_dirs = [
	I0919 20:06:38.348688   48464 command_runner.go:130] > # 	"/etc/cdi",
	I0919 20:06:38.348694   48464 command_runner.go:130] > # 	"/var/run/cdi",
	I0919 20:06:38.348700   48464 command_runner.go:130] > # ]
	I0919 20:06:38.348715   48464 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0919 20:06:38.348725   48464 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0919 20:06:38.348732   48464 command_runner.go:130] > # Defaults to false.
	I0919 20:06:38.348740   48464 command_runner.go:130] > # device_ownership_from_security_context = false
	I0919 20:06:38.348755   48464 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0919 20:06:38.348764   48464 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0919 20:06:38.348770   48464 command_runner.go:130] > # hooks_dir = [
	I0919 20:06:38.348783   48464 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0919 20:06:38.348790   48464 command_runner.go:130] > # ]
	I0919 20:06:38.348799   48464 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0919 20:06:38.348814   48464 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0919 20:06:38.348821   48464 command_runner.go:130] > # its default mounts from the following two files:
	I0919 20:06:38.348827   48464 command_runner.go:130] > #
	I0919 20:06:38.348836   48464 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0919 20:06:38.348932   48464 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0919 20:06:38.348959   48464 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0919 20:06:38.348965   48464 command_runner.go:130] > #
	I0919 20:06:38.348982   48464 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0919 20:06:38.348992   48464 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0919 20:06:38.349002   48464 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0919 20:06:38.349017   48464 command_runner.go:130] > #      only add mounts it finds in this file.
	I0919 20:06:38.349022   48464 command_runner.go:130] > #
	I0919 20:06:38.349031   48464 command_runner.go:130] > # default_mounts_file = ""
	I0919 20:06:38.349040   48464 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0919 20:06:38.349121   48464 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0919 20:06:38.349138   48464 command_runner.go:130] > pids_limit = 1024
	I0919 20:06:38.349148   48464 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0919 20:06:38.349162   48464 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0919 20:06:38.349176   48464 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0919 20:06:38.349197   48464 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0919 20:06:38.349210   48464 command_runner.go:130] > # log_size_max = -1
	I0919 20:06:38.349221   48464 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0919 20:06:38.349230   48464 command_runner.go:130] > # log_to_journald = false
	I0919 20:06:38.349245   48464 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0919 20:06:38.349257   48464 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0919 20:06:38.349265   48464 command_runner.go:130] > # Path to directory for container attach sockets.
	I0919 20:06:38.349278   48464 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0919 20:06:38.349286   48464 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0919 20:06:38.349292   48464 command_runner.go:130] > # bind_mount_prefix = ""
	I0919 20:06:38.349301   48464 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0919 20:06:38.349312   48464 command_runner.go:130] > # read_only = false
	I0919 20:06:38.349322   48464 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0919 20:06:38.349331   48464 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0919 20:06:38.349353   48464 command_runner.go:130] > # live configuration reload.
	I0919 20:06:38.349361   48464 command_runner.go:130] > # log_level = "info"
	I0919 20:06:38.349370   48464 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0919 20:06:38.349378   48464 command_runner.go:130] > # This option supports live configuration reload.
	I0919 20:06:38.349390   48464 command_runner.go:130] > # log_filter = ""
	I0919 20:06:38.349404   48464 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0919 20:06:38.349414   48464 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0919 20:06:38.349419   48464 command_runner.go:130] > # separated by comma.
	I0919 20:06:38.349436   48464 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0919 20:06:38.349442   48464 command_runner.go:130] > # uid_mappings = ""
	I0919 20:06:38.349451   48464 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0919 20:06:38.349465   48464 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0919 20:06:38.349471   48464 command_runner.go:130] > # separated by comma.
	I0919 20:06:38.349489   48464 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0919 20:06:38.349494   48464 command_runner.go:130] > # gid_mappings = ""
	I0919 20:06:38.349503   48464 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0919 20:06:38.349513   48464 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 20:06:38.349529   48464 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 20:06:38.349541   48464 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0919 20:06:38.349553   48464 command_runner.go:130] > # minimum_mappable_uid = -1
	I0919 20:06:38.349562   48464 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0919 20:06:38.349572   48464 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 20:06:38.349585   48464 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 20:06:38.349596   48464 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0919 20:06:38.349602   48464 command_runner.go:130] > # minimum_mappable_gid = -1
	I0919 20:06:38.349658   48464 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0919 20:06:38.349671   48464 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0919 20:06:38.349681   48464 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0919 20:06:38.349688   48464 command_runner.go:130] > # ctr_stop_timeout = 30
	I0919 20:06:38.349701   48464 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0919 20:06:38.349716   48464 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0919 20:06:38.349725   48464 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0919 20:06:38.349738   48464 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0919 20:06:38.349744   48464 command_runner.go:130] > drop_infra_ctr = false
	I0919 20:06:38.349750   48464 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0919 20:06:38.349758   48464 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0919 20:06:38.349769   48464 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0919 20:06:38.349777   48464 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0919 20:06:38.349787   48464 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0919 20:06:38.349793   48464 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0919 20:06:38.349798   48464 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0919 20:06:38.349805   48464 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0919 20:06:38.349809   48464 command_runner.go:130] > # shared_cpuset = ""
	I0919 20:06:38.349814   48464 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0919 20:06:38.349819   48464 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0919 20:06:38.349825   48464 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0919 20:06:38.349833   48464 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0919 20:06:38.349841   48464 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0919 20:06:38.349849   48464 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0919 20:06:38.349856   48464 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0919 20:06:38.349860   48464 command_runner.go:130] > # enable_criu_support = false
	I0919 20:06:38.349867   48464 command_runner.go:130] > # Enable/disable the generation of the container,
	I0919 20:06:38.349875   48464 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0919 20:06:38.349879   48464 command_runner.go:130] > # enable_pod_events = false
	I0919 20:06:38.349888   48464 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0919 20:06:38.349899   48464 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0919 20:06:38.349903   48464 command_runner.go:130] > # default_runtime = "runc"
	I0919 20:06:38.349910   48464 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0919 20:06:38.349917   48464 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0919 20:06:38.349931   48464 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0919 20:06:38.349939   48464 command_runner.go:130] > # creation as a file is not desired either.
	I0919 20:06:38.349951   48464 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0919 20:06:38.349955   48464 command_runner.go:130] > # the hostname is being managed dynamically.
	I0919 20:06:38.349960   48464 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0919 20:06:38.349963   48464 command_runner.go:130] > # ]
	I0919 20:06:38.349971   48464 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0919 20:06:38.349977   48464 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0919 20:06:38.349983   48464 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0919 20:06:38.349990   48464 command_runner.go:130] > # Each entry in the table should follow the format:
	I0919 20:06:38.349998   48464 command_runner.go:130] > #
	I0919 20:06:38.350002   48464 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0919 20:06:38.350007   48464 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0919 20:06:38.350052   48464 command_runner.go:130] > # runtime_type = "oci"
	I0919 20:06:38.350059   48464 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0919 20:06:38.350064   48464 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0919 20:06:38.350068   48464 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0919 20:06:38.350073   48464 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0919 20:06:38.350079   48464 command_runner.go:130] > # monitor_env = []
	I0919 20:06:38.350083   48464 command_runner.go:130] > # privileged_without_host_devices = false
	I0919 20:06:38.350088   48464 command_runner.go:130] > # allowed_annotations = []
	I0919 20:06:38.350094   48464 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0919 20:06:38.350097   48464 command_runner.go:130] > # Where:
	I0919 20:06:38.350105   48464 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0919 20:06:38.350110   48464 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0919 20:06:38.350119   48464 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0919 20:06:38.350125   48464 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0919 20:06:38.350128   48464 command_runner.go:130] > #   in $PATH.
	I0919 20:06:38.350134   48464 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0919 20:06:38.350141   48464 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0919 20:06:38.350150   48464 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0919 20:06:38.350153   48464 command_runner.go:130] > #   state.
	I0919 20:06:38.350162   48464 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0919 20:06:38.350168   48464 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0919 20:06:38.350174   48464 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0919 20:06:38.350187   48464 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0919 20:06:38.350193   48464 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0919 20:06:38.350202   48464 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0919 20:06:38.350208   48464 command_runner.go:130] > #   The currently recognized values are:
	I0919 20:06:38.350214   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0919 20:06:38.350223   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0919 20:06:38.350229   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0919 20:06:38.350235   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0919 20:06:38.350249   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0919 20:06:38.350255   48464 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0919 20:06:38.350264   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0919 20:06:38.350270   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0919 20:06:38.350278   48464 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0919 20:06:38.350284   48464 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0919 20:06:38.350288   48464 command_runner.go:130] > #   deprecated option "conmon".
	I0919 20:06:38.350297   48464 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0919 20:06:38.350302   48464 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0919 20:06:38.350308   48464 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0919 20:06:38.350315   48464 command_runner.go:130] > #   should be moved to the container's cgroup
	I0919 20:06:38.350321   48464 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0919 20:06:38.350326   48464 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0919 20:06:38.350335   48464 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0919 20:06:38.350347   48464 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0919 20:06:38.350350   48464 command_runner.go:130] > #
	I0919 20:06:38.350359   48464 command_runner.go:130] > # Using the seccomp notifier feature:
	I0919 20:06:38.350363   48464 command_runner.go:130] > #
	I0919 20:06:38.350371   48464 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0919 20:06:38.350377   48464 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0919 20:06:38.350382   48464 command_runner.go:130] > #
	I0919 20:06:38.350391   48464 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0919 20:06:38.350397   48464 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0919 20:06:38.350399   48464 command_runner.go:130] > #
	I0919 20:06:38.350408   48464 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0919 20:06:38.350411   48464 command_runner.go:130] > # feature.
	I0919 20:06:38.350414   48464 command_runner.go:130] > #
	I0919 20:06:38.350422   48464 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0919 20:06:38.350430   48464 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0919 20:06:38.350437   48464 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0919 20:06:38.350443   48464 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0919 20:06:38.350451   48464 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0919 20:06:38.350454   48464 command_runner.go:130] > #
	I0919 20:06:38.350464   48464 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0919 20:06:38.350473   48464 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0919 20:06:38.350475   48464 command_runner.go:130] > #
	I0919 20:06:38.350481   48464 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0919 20:06:38.350486   48464 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0919 20:06:38.350489   48464 command_runner.go:130] > #
	I0919 20:06:38.350497   48464 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0919 20:06:38.350502   48464 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0919 20:06:38.350506   48464 command_runner.go:130] > # limitation.
	I0919 20:06:38.350512   48464 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0919 20:06:38.350516   48464 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0919 20:06:38.350520   48464 command_runner.go:130] > runtime_type = "oci"
	I0919 20:06:38.350527   48464 command_runner.go:130] > runtime_root = "/run/runc"
	I0919 20:06:38.350531   48464 command_runner.go:130] > runtime_config_path = ""
	I0919 20:06:38.350538   48464 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0919 20:06:38.350541   48464 command_runner.go:130] > monitor_cgroup = "pod"
	I0919 20:06:38.350545   48464 command_runner.go:130] > monitor_exec_cgroup = ""
	I0919 20:06:38.350549   48464 command_runner.go:130] > monitor_env = [
	I0919 20:06:38.350554   48464 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0919 20:06:38.350559   48464 command_runner.go:130] > ]
	I0919 20:06:38.350564   48464 command_runner.go:130] > privileged_without_host_devices = false
	I0919 20:06:38.350570   48464 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0919 20:06:38.350575   48464 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0919 20:06:38.350583   48464 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0919 20:06:38.350590   48464 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0919 20:06:38.350600   48464 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0919 20:06:38.350606   48464 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0919 20:06:38.350621   48464 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0919 20:06:38.350628   48464 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0919 20:06:38.350637   48464 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0919 20:06:38.350643   48464 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0919 20:06:38.350647   48464 command_runner.go:130] > # Example:
	I0919 20:06:38.350655   48464 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0919 20:06:38.350663   48464 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0919 20:06:38.350668   48464 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0919 20:06:38.350676   48464 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0919 20:06:38.350679   48464 command_runner.go:130] > # cpuset = 0
	I0919 20:06:38.350682   48464 command_runner.go:130] > # cpushares = "0-1"
	I0919 20:06:38.350686   48464 command_runner.go:130] > # Where:
	I0919 20:06:38.350690   48464 command_runner.go:130] > # The workload name is workload-type.
	I0919 20:06:38.350703   48464 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0919 20:06:38.350708   48464 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0919 20:06:38.350713   48464 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0919 20:06:38.350724   48464 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0919 20:06:38.350729   48464 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0919 20:06:38.350735   48464 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0919 20:06:38.350745   48464 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0919 20:06:38.350749   48464 command_runner.go:130] > # Default value is set to true
	I0919 20:06:38.350753   48464 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0919 20:06:38.350761   48464 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0919 20:06:38.350765   48464 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0919 20:06:38.350769   48464 command_runner.go:130] > # Default value is set to 'false'
	I0919 20:06:38.350773   48464 command_runner.go:130] > # disable_hostport_mapping = false
	I0919 20:06:38.350782   48464 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0919 20:06:38.350785   48464 command_runner.go:130] > #
	I0919 20:06:38.350790   48464 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0919 20:06:38.350802   48464 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0919 20:06:38.350808   48464 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0919 20:06:38.350816   48464 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0919 20:06:38.350829   48464 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0919 20:06:38.350833   48464 command_runner.go:130] > [crio.image]
	I0919 20:06:38.350842   48464 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0919 20:06:38.350854   48464 command_runner.go:130] > # default_transport = "docker://"
	I0919 20:06:38.350867   48464 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0919 20:06:38.350874   48464 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0919 20:06:38.350885   48464 command_runner.go:130] > # global_auth_file = ""
	I0919 20:06:38.350894   48464 command_runner.go:130] > # The image used to instantiate infra containers.
	I0919 20:06:38.350899   48464 command_runner.go:130] > # This option supports live configuration reload.
	I0919 20:06:38.350904   48464 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0919 20:06:38.350918   48464 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0919 20:06:38.350928   48464 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0919 20:06:38.350935   48464 command_runner.go:130] > # This option supports live configuration reload.
	I0919 20:06:38.350947   48464 command_runner.go:130] > # pause_image_auth_file = ""
	I0919 20:06:38.350956   48464 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0919 20:06:38.350966   48464 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0919 20:06:38.350980   48464 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0919 20:06:38.350989   48464 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0919 20:06:38.350994   48464 command_runner.go:130] > # pause_command = "/pause"
	I0919 20:06:38.351007   48464 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0919 20:06:38.351017   48464 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0919 20:06:38.351025   48464 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0919 20:06:38.351046   48464 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0919 20:06:38.351052   48464 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0919 20:06:38.351058   48464 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0919 20:06:38.351065   48464 command_runner.go:130] > # pinned_images = [
	I0919 20:06:38.351067   48464 command_runner.go:130] > # ]
	I0919 20:06:38.351073   48464 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0919 20:06:38.351092   48464 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0919 20:06:38.351101   48464 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0919 20:06:38.351107   48464 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0919 20:06:38.351115   48464 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0919 20:06:38.351118   48464 command_runner.go:130] > # signature_policy = ""
	I0919 20:06:38.351123   48464 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0919 20:06:38.351130   48464 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0919 20:06:38.351138   48464 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0919 20:06:38.351144   48464 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0919 20:06:38.351152   48464 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0919 20:06:38.351157   48464 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0919 20:06:38.351164   48464 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0919 20:06:38.351183   48464 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0919 20:06:38.351187   48464 command_runner.go:130] > # changing them here.
	I0919 20:06:38.351191   48464 command_runner.go:130] > # insecure_registries = [
	I0919 20:06:38.351194   48464 command_runner.go:130] > # ]
	I0919 20:06:38.351203   48464 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0919 20:06:38.351208   48464 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0919 20:06:38.351214   48464 command_runner.go:130] > # image_volumes = "mkdir"
	I0919 20:06:38.351223   48464 command_runner.go:130] > # Temporary directory to use for storing big files
	I0919 20:06:38.351227   48464 command_runner.go:130] > # big_files_temporary_dir = ""
	I0919 20:06:38.351236   48464 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0919 20:06:38.351239   48464 command_runner.go:130] > # CNI plugins.
	I0919 20:06:38.351243   48464 command_runner.go:130] > [crio.network]
	I0919 20:06:38.351251   48464 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0919 20:06:38.351256   48464 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0919 20:06:38.351260   48464 command_runner.go:130] > # cni_default_network = ""
	I0919 20:06:38.351265   48464 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0919 20:06:38.351272   48464 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0919 20:06:38.351277   48464 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0919 20:06:38.351280   48464 command_runner.go:130] > # plugin_dirs = [
	I0919 20:06:38.351284   48464 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0919 20:06:38.351287   48464 command_runner.go:130] > # ]
	I0919 20:06:38.351295   48464 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0919 20:06:38.351298   48464 command_runner.go:130] > [crio.metrics]
	I0919 20:06:38.351303   48464 command_runner.go:130] > # Globally enable or disable metrics support.
	I0919 20:06:38.351306   48464 command_runner.go:130] > enable_metrics = true
	I0919 20:06:38.351313   48464 command_runner.go:130] > # Specify enabled metrics collectors.
	I0919 20:06:38.351317   48464 command_runner.go:130] > # Per default all metrics are enabled.
	I0919 20:06:38.351323   48464 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0919 20:06:38.351332   48464 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0919 20:06:38.351343   48464 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0919 20:06:38.351348   48464 command_runner.go:130] > # metrics_collectors = [
	I0919 20:06:38.351351   48464 command_runner.go:130] > # 	"operations",
	I0919 20:06:38.351356   48464 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0919 20:06:38.351367   48464 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0919 20:06:38.351371   48464 command_runner.go:130] > # 	"operations_errors",
	I0919 20:06:38.351379   48464 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0919 20:06:38.351383   48464 command_runner.go:130] > # 	"image_pulls_by_name",
	I0919 20:06:38.351387   48464 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0919 20:06:38.351393   48464 command_runner.go:130] > # 	"image_pulls_failures",
	I0919 20:06:38.351397   48464 command_runner.go:130] > # 	"image_pulls_successes",
	I0919 20:06:38.351401   48464 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0919 20:06:38.351405   48464 command_runner.go:130] > # 	"image_layer_reuse",
	I0919 20:06:38.351410   48464 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0919 20:06:38.351419   48464 command_runner.go:130] > # 	"containers_oom_total",
	I0919 20:06:38.351422   48464 command_runner.go:130] > # 	"containers_oom",
	I0919 20:06:38.351426   48464 command_runner.go:130] > # 	"processes_defunct",
	I0919 20:06:38.351430   48464 command_runner.go:130] > # 	"operations_total",
	I0919 20:06:38.351434   48464 command_runner.go:130] > # 	"operations_latency_seconds",
	I0919 20:06:38.351441   48464 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0919 20:06:38.351445   48464 command_runner.go:130] > # 	"operations_errors_total",
	I0919 20:06:38.351450   48464 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0919 20:06:38.351454   48464 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0919 20:06:38.351458   48464 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0919 20:06:38.351465   48464 command_runner.go:130] > # 	"image_pulls_success_total",
	I0919 20:06:38.351469   48464 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0919 20:06:38.351473   48464 command_runner.go:130] > # 	"containers_oom_count_total",
	I0919 20:06:38.351477   48464 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0919 20:06:38.351484   48464 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0919 20:06:38.351487   48464 command_runner.go:130] > # ]
	I0919 20:06:38.351492   48464 command_runner.go:130] > # The port on which the metrics server will listen.
	I0919 20:06:38.351496   48464 command_runner.go:130] > # metrics_port = 9090
	I0919 20:06:38.351500   48464 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0919 20:06:38.351507   48464 command_runner.go:130] > # metrics_socket = ""
	I0919 20:06:38.351512   48464 command_runner.go:130] > # The certificate for the secure metrics server.
	I0919 20:06:38.351518   48464 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0919 20:06:38.351530   48464 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0919 20:06:38.351542   48464 command_runner.go:130] > # certificate on any modification event.
	I0919 20:06:38.351546   48464 command_runner.go:130] > # metrics_cert = ""
	I0919 20:06:38.351551   48464 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0919 20:06:38.351559   48464 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0919 20:06:38.351562   48464 command_runner.go:130] > # metrics_key = ""
	I0919 20:06:38.351568   48464 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0919 20:06:38.351571   48464 command_runner.go:130] > [crio.tracing]
	I0919 20:06:38.351579   48464 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0919 20:06:38.351583   48464 command_runner.go:130] > # enable_tracing = false
	I0919 20:06:38.351587   48464 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0919 20:06:38.351591   48464 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0919 20:06:38.351600   48464 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0919 20:06:38.351605   48464 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0919 20:06:38.351608   48464 command_runner.go:130] > # CRI-O NRI configuration.
	I0919 20:06:38.351612   48464 command_runner.go:130] > [crio.nri]
	I0919 20:06:38.351616   48464 command_runner.go:130] > # Globally enable or disable NRI.
	I0919 20:06:38.351622   48464 command_runner.go:130] > # enable_nri = false
	I0919 20:06:38.351626   48464 command_runner.go:130] > # NRI socket to listen on.
	I0919 20:06:38.351630   48464 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0919 20:06:38.351635   48464 command_runner.go:130] > # NRI plugin directory to use.
	I0919 20:06:38.351642   48464 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0919 20:06:38.351648   48464 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0919 20:06:38.351653   48464 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0919 20:06:38.351658   48464 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0919 20:06:38.351665   48464 command_runner.go:130] > # nri_disable_connections = false
	I0919 20:06:38.351670   48464 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0919 20:06:38.351674   48464 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0919 20:06:38.351680   48464 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0919 20:06:38.351686   48464 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0919 20:06:38.351692   48464 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0919 20:06:38.351695   48464 command_runner.go:130] > [crio.stats]
	I0919 20:06:38.351701   48464 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0919 20:06:38.351713   48464 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0919 20:06:38.351721   48464 command_runner.go:130] > # stats_collection_period = 0
	I0919 20:06:38.351748   48464 command_runner.go:130] ! time="2024-09-19 20:06:38.301176812Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0919 20:06:38.351763   48464 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0919 20:06:38.351852   48464 cni.go:84] Creating CNI manager for ""
	I0919 20:06:38.351859   48464 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 20:06:38.351881   48464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 20:06:38.351904   48464 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-282812 NodeName:multinode-282812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 20:06:38.352124   48464 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-282812"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
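The YAML above is the kubeadm config that minikube generates and then copies to the node as kubeadm.yaml.new a few lines further down. As a rough, hypothetical sketch of that rendering step (the struct, field names, and template below are illustrative stand-ins, not minikube's actual kubeadm.go template), the same kind of output can be produced with Go's text/template:

package main

import (
	"os"
	"text/template"
)

// kubeadmValues is a trimmed-down, hypothetical stand-in for the values
// minikube substitutes into its kubeadm template.
type kubeadmValues struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	// Values mirror the log above: advertise address 192.168.39.87,
	// node name multinode-282812, pod CIDR 10.244.0.0/16.
	v := kubeadmValues{
		AdvertiseAddress: "192.168.39.87",
		BindPort:         8443,
		NodeName:         "multinode-282812",
		PodSubnet:        "10.244.0.0/16",
	}
	if err := tmpl.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}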
	I0919 20:06:38.352194   48464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 20:06:38.362779   48464 command_runner.go:130] > kubeadm
	I0919 20:06:38.362796   48464 command_runner.go:130] > kubectl
	I0919 20:06:38.362802   48464 command_runner.go:130] > kubelet
	I0919 20:06:38.362833   48464 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 20:06:38.362883   48464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 20:06:38.373151   48464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0919 20:06:38.390286   48464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 20:06:38.407152   48464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0919 20:06:38.423759   48464 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0919 20:06:38.427552   48464 command_runner.go:130] > 192.168.39.87	control-plane.minikube.internal
	I0919 20:06:38.427622   48464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:06:38.569089   48464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 20:06:38.584494   48464 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812 for IP: 192.168.39.87
	I0919 20:06:38.584521   48464 certs.go:194] generating shared ca certs ...
	I0919 20:06:38.584543   48464 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 20:06:38.584720   48464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 20:06:38.584778   48464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 20:06:38.584793   48464 certs.go:256] generating profile certs ...
	I0919 20:06:38.584890   48464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/client.key
	I0919 20:06:38.584958   48464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/apiserver.key.ec5d7b66
	I0919 20:06:38.585014   48464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/proxy-client.key
	I0919 20:06:38.585025   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 20:06:38.585044   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 20:06:38.585058   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 20:06:38.585093   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 20:06:38.585111   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 20:06:38.585129   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 20:06:38.585146   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 20:06:38.585159   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 20:06:38.585209   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 20:06:38.585236   48464 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 20:06:38.585244   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 20:06:38.585266   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 20:06:38.585288   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 20:06:38.585309   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 20:06:38.585346   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 20:06:38.585372   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 20:06:38.585388   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:06:38.585406   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 20:06:38.586025   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 20:06:38.610081   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 20:06:38.633213   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 20:06:38.656601   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 20:06:38.680846   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 20:06:38.704331   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 20:06:38.753090   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 20:06:38.807615   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 20:06:38.859136   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 20:06:38.892564   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 20:06:38.921445   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 20:06:38.959906   48464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 20:06:38.988225   48464 ssh_runner.go:195] Run: openssl version
	I0919 20:06:39.001632   48464 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0919 20:06:39.002151   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 20:06:39.015700   48464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 20:06:39.030663   48464 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 20:06:39.030698   48464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 20:06:39.030751   48464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 20:06:39.038219   48464 command_runner.go:130] > 51391683
	I0919 20:06:39.038341   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 20:06:39.049239   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 20:06:39.060766   48464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 20:06:39.065318   48464 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 20:06:39.065354   48464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 20:06:39.065417   48464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 20:06:39.071115   48464 command_runner.go:130] > 3ec20f2e
	I0919 20:06:39.071173   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 20:06:39.081917   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 20:06:39.093022   48464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:06:39.097423   48464 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:06:39.097547   48464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:06:39.097620   48464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:06:39.103185   48464 command_runner.go:130] > b5213941
	I0919 20:06:39.103254   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
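Each pair of commands above installs a CA certificate into the node's trust store: "openssl x509 -hash -noout" computes the subject hash, and "ln -fs" links the PEM into /etc/ssl/certs as <hash>.0. A minimal Go sketch of that step, shelling out to openssl exactly as the log does; the helper name linkCACert is illustrative and not part of minikube:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the two commands in the log: compute the OpenSSL subject
// hash of a PEM certificate, then symlink it into the trust directory as
// <hash>.0 (the name OpenSSL-based tools look up).
func linkCACert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}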
	I0919 20:06:39.113141   48464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 20:06:39.117806   48464 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 20:06:39.117837   48464 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0919 20:06:39.117846   48464 command_runner.go:130] > Device: 253,1	Inode: 3148840     Links: 1
	I0919 20:06:39.117855   48464 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 20:06:39.117869   48464 command_runner.go:130] > Access: 2024-09-19 19:59:55.109042311 +0000
	I0919 20:06:39.117877   48464 command_runner.go:130] > Modify: 2024-09-19 19:59:55.109042311 +0000
	I0919 20:06:39.117887   48464 command_runner.go:130] > Change: 2024-09-19 19:59:55.109042311 +0000
	I0919 20:06:39.117897   48464 command_runner.go:130] >  Birth: 2024-09-19 19:59:55.109042311 +0000
	I0919 20:06:39.118035   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 20:06:39.123688   48464 command_runner.go:130] > Certificate will not expire
	I0919 20:06:39.123868   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 20:06:39.129325   48464 command_runner.go:130] > Certificate will not expire
	I0919 20:06:39.129490   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 20:06:39.135090   48464 command_runner.go:130] > Certificate will not expire
	I0919 20:06:39.135165   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 20:06:39.140436   48464 command_runner.go:130] > Certificate will not expire
	I0919 20:06:39.140599   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 20:06:39.146097   48464 command_runner.go:130] > Certificate will not expire
	I0919 20:06:39.146244   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 20:06:39.151734   48464 command_runner.go:130] > Certificate will not expire
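The repeated "openssl x509 -checkend 86400" runs ask whether each certificate expires within the next 24 hours (86400 seconds). Assuming one wanted the same check in pure Go instead of shelling out, a sketch with crypto/x509 could look like this (not minikube's actual code path):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is the question `openssl x509 -checkend 86400` answers for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}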
	I0919 20:06:39.151801   48464 kubeadm.go:392] StartCluster: {Name:multinode-282812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-282812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:06:39.151924   48464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 20:06:39.151975   48464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 20:06:39.194077   48464 command_runner.go:130] > d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63
	I0919 20:06:39.194108   48464 command_runner.go:130] > b79f6dfa534789a6ecc5defa51edfc1de4dd7718b5ccb224413219ca33cfce07
	I0919 20:06:39.194114   48464 command_runner.go:130] > 8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89
	I0919 20:06:39.194146   48464 command_runner.go:130] > 45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b
	I0919 20:06:39.194152   48464 command_runner.go:130] > e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b
	I0919 20:06:39.194159   48464 command_runner.go:130] > fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f
	I0919 20:06:39.194168   48464 command_runner.go:130] > dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728
	I0919 20:06:39.194180   48464 command_runner.go:130] > 625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af
	I0919 20:06:39.194190   48464 command_runner.go:130] > 65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6
	I0919 20:06:39.194214   48464 cri.go:89] found id: "d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63"
	I0919 20:06:39.194225   48464 cri.go:89] found id: "b79f6dfa534789a6ecc5defa51edfc1de4dd7718b5ccb224413219ca33cfce07"
	I0919 20:06:39.194229   48464 cri.go:89] found id: "8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89"
	I0919 20:06:39.194232   48464 cri.go:89] found id: "45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b"
	I0919 20:06:39.194235   48464 cri.go:89] found id: "e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b"
	I0919 20:06:39.194238   48464 cri.go:89] found id: "fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f"
	I0919 20:06:39.194243   48464 cri.go:89] found id: "dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728"
	I0919 20:06:39.194246   48464 cri.go:89] found id: "625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af"
	I0919 20:06:39.194248   48464 cri.go:89] found id: "65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6"
	I0919 20:06:39.194255   48464 cri.go:89] found id: ""
	I0919 20:06:39.194306   48464 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.318829228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776507318809385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a7aafdc-9500-4e00-9c67-04b5a5e35c1d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.319277703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=398c3bb1-680b-4a24-8ca0-8a81f94568c0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.319328243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=398c3bb1-680b-4a24-8ca0-8a81f94568c0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.319699268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1d7bd2097da7969101364bded34cb941ec63d5c8d335186fe1c3e2f5ee653a,PodSandboxId:398153f70f0c640ecd20410e84e6ae1981468353b5d5324e3d740298ade9168a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726776435823242959,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87309d2462fc4f7dfa4a9c5baf53f6a205cce9e51b2069bf554d905b50062ee6,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726776411307311389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdced286d5a6a15cfa4737af7cedc044f1b5f2176b096eb0c558979e58d05bdb,PodSandboxId:9fd4a69759df5b3764e69d2e95c8294bdd52c02e76c0409791d4dd20de44b5d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726776402374414164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c56b813
5-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91b2b6716ecb009c297b64b6e3a197b2b1ccfb373808d9960b1b97761172f09,PodSandboxId:8e9ee218230cf1f2e8fd6ddace0a167e8fbc169c31abdab80004d3273e8af707,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726776402538704736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},An
notations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3df28a477bc9b3a710219db447412f3bffc1d630456b14fc6bd107bbea44c55,PodSandboxId:224f00c2f20983646cdcd50553060fd16a1912e4b8adb12b7ffae222a15d50ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726776402385443300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85476d7e8d2b82a4dc3231d06dcca93f418d33c58c1a55f9da28344d912aac0a,PodSandboxId:e1f00caf995deb572d0e41e94b279b718b391caf87482f4e853c2e6685ed3f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726776402298290825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io.kubernetes.container.hash:
12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30431653e0e43ed529bb73220f39ab0fe58f2228aca51af2005a98e730ee5eca,PodSandboxId:8274e970fa8a4796eedb588ea33c1b8fcc0db0f9f1cd7bcd1a723893b17d126f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726776402253040111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69bde3b12f7d33021d4a5b784e9a8355feb38ad0f68cc72f6ce0e95f8090386d,PodSandboxId:d97cc1b9bfb7e3b6554a11e8a99779d72501c9ed2627a4f653afe5e63678f046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726776402202701308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.container.hash: 7
df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f0a92696c2dd48ea17d23a80293b334aafee2af059bc2b881cc64a2250c13a,PodSandboxId:3d7c4d3431ba405cc382d772533b7f690de776d4e0118efaa2c04205df266838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726776402095959832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726776398920591256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08cd67d64187e006994bc65839810a122496131699388a5379e209bf1e1b614,PodSandboxId:111fee9576f330deeea7b39a27ba3438989137455f74de336149bc60f6df7990,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726776081068852145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89,PodSandboxId:79a63ce099f45bd7977e3ae258f8d8cea024ad943b1d108eff7f159926dd7238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726776022932528561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: c56b8135-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b,PodSandboxId:d3fa20aed888f943be3030a650c6f710139632afbef3097461659d701298c3b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726776010913361485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b,PodSandboxId:c1a37209beb6fc5e334ca94bc59827bb5253e7859d94ad3dec33e37d856d5624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726776010850418703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31
-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f,PodSandboxId:f4592b7fce465589a0e1c51c95be50805f1129af964d1983dd06209bb65420bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726775998888676408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728,PodSandboxId:fe5f49b8d407d041f4cf9d974d854cde52e888e76ac3f66f5ed4cf54b1ca8111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775998866054027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af,PodSandboxId:7a92ed2f7d51e6a7e5b571faee81752a0b526c7420aa32e49252b63d2b7682aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726775998833399338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6,PodSandboxId:00405c53af3a27930cbdadb4a4ba8c44fd9334f2d2c6c21e4771f1de907b9c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775998804230790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=398c3bb1-680b-4a24-8ca0-8a81f94568c0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.361089118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a22be68c-09d0-49b5-815a-94dedf710b4a name=/runtime.v1.RuntimeService/Version
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.361291894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a22be68c-09d0-49b5-815a-94dedf710b4a name=/runtime.v1.RuntimeService/Version
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.362555653Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8de3de16-e439-4290-ab84-0e182ac18ee5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.362994507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776507362969549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8de3de16-e439-4290-ab84-0e182ac18ee5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.363689671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf9cf5c6-27ba-4446-aebe-1beaf97293a1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.363764790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf9cf5c6-27ba-4446-aebe-1beaf97293a1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.364155160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1d7bd2097da7969101364bded34cb941ec63d5c8d335186fe1c3e2f5ee653a,PodSandboxId:398153f70f0c640ecd20410e84e6ae1981468353b5d5324e3d740298ade9168a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726776435823242959,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87309d2462fc4f7dfa4a9c5baf53f6a205cce9e51b2069bf554d905b50062ee6,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726776411307311389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdced286d5a6a15cfa4737af7cedc044f1b5f2176b096eb0c558979e58d05bdb,PodSandboxId:9fd4a69759df5b3764e69d2e95c8294bdd52c02e76c0409791d4dd20de44b5d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726776402374414164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c56b813
5-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91b2b6716ecb009c297b64b6e3a197b2b1ccfb373808d9960b1b97761172f09,PodSandboxId:8e9ee218230cf1f2e8fd6ddace0a167e8fbc169c31abdab80004d3273e8af707,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726776402538704736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},An
notations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3df28a477bc9b3a710219db447412f3bffc1d630456b14fc6bd107bbea44c55,PodSandboxId:224f00c2f20983646cdcd50553060fd16a1912e4b8adb12b7ffae222a15d50ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726776402385443300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85476d7e8d2b82a4dc3231d06dcca93f418d33c58c1a55f9da28344d912aac0a,PodSandboxId:e1f00caf995deb572d0e41e94b279b718b391caf87482f4e853c2e6685ed3f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726776402298290825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io.kubernetes.container.hash:
12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30431653e0e43ed529bb73220f39ab0fe58f2228aca51af2005a98e730ee5eca,PodSandboxId:8274e970fa8a4796eedb588ea33c1b8fcc0db0f9f1cd7bcd1a723893b17d126f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726776402253040111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69bde3b12f7d33021d4a5b784e9a8355feb38ad0f68cc72f6ce0e95f8090386d,PodSandboxId:d97cc1b9bfb7e3b6554a11e8a99779d72501c9ed2627a4f653afe5e63678f046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726776402202701308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.container.hash: 7
df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f0a92696c2dd48ea17d23a80293b334aafee2af059bc2b881cc64a2250c13a,PodSandboxId:3d7c4d3431ba405cc382d772533b7f690de776d4e0118efaa2c04205df266838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726776402095959832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726776398920591256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08cd67d64187e006994bc65839810a122496131699388a5379e209bf1e1b614,PodSandboxId:111fee9576f330deeea7b39a27ba3438989137455f74de336149bc60f6df7990,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726776081068852145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89,PodSandboxId:79a63ce099f45bd7977e3ae258f8d8cea024ad943b1d108eff7f159926dd7238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726776022932528561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: c56b8135-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b,PodSandboxId:d3fa20aed888f943be3030a650c6f710139632afbef3097461659d701298c3b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726776010913361485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b,PodSandboxId:c1a37209beb6fc5e334ca94bc59827bb5253e7859d94ad3dec33e37d856d5624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726776010850418703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31
-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f,PodSandboxId:f4592b7fce465589a0e1c51c95be50805f1129af964d1983dd06209bb65420bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726775998888676408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728,PodSandboxId:fe5f49b8d407d041f4cf9d974d854cde52e888e76ac3f66f5ed4cf54b1ca8111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775998866054027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af,PodSandboxId:7a92ed2f7d51e6a7e5b571faee81752a0b526c7420aa32e49252b63d2b7682aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726775998833399338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6,PodSandboxId:00405c53af3a27930cbdadb4a4ba8c44fd9334f2d2c6c21e4771f1de907b9c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775998804230790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf9cf5c6-27ba-4446-aebe-1beaf97293a1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.404801487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce3eaf97-5d26-4eea-9a75-3c92da88705e name=/runtime.v1.RuntimeService/Version
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.404879119Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce3eaf97-5d26-4eea-9a75-3c92da88705e name=/runtime.v1.RuntimeService/Version
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.406498423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6718ce34-0829-4628-8832-664234af7e4f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.407193268Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776507407169565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6718ce34-0829-4628-8832-664234af7e4f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.407899026Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d76a04e-bd6d-4b2f-8ce3-60b7a429e0a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.407961674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d76a04e-bd6d-4b2f-8ce3-60b7a429e0a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.408389186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1d7bd2097da7969101364bded34cb941ec63d5c8d335186fe1c3e2f5ee653a,PodSandboxId:398153f70f0c640ecd20410e84e6ae1981468353b5d5324e3d740298ade9168a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726776435823242959,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87309d2462fc4f7dfa4a9c5baf53f6a205cce9e51b2069bf554d905b50062ee6,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726776411307311389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdced286d5a6a15cfa4737af7cedc044f1b5f2176b096eb0c558979e58d05bdb,PodSandboxId:9fd4a69759df5b3764e69d2e95c8294bdd52c02e76c0409791d4dd20de44b5d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726776402374414164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c56b813
5-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91b2b6716ecb009c297b64b6e3a197b2b1ccfb373808d9960b1b97761172f09,PodSandboxId:8e9ee218230cf1f2e8fd6ddace0a167e8fbc169c31abdab80004d3273e8af707,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726776402538704736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},An
notations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3df28a477bc9b3a710219db447412f3bffc1d630456b14fc6bd107bbea44c55,PodSandboxId:224f00c2f20983646cdcd50553060fd16a1912e4b8adb12b7ffae222a15d50ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726776402385443300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85476d7e8d2b82a4dc3231d06dcca93f418d33c58c1a55f9da28344d912aac0a,PodSandboxId:e1f00caf995deb572d0e41e94b279b718b391caf87482f4e853c2e6685ed3f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726776402298290825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io.kubernetes.container.hash:
12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30431653e0e43ed529bb73220f39ab0fe58f2228aca51af2005a98e730ee5eca,PodSandboxId:8274e970fa8a4796eedb588ea33c1b8fcc0db0f9f1cd7bcd1a723893b17d126f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726776402253040111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69bde3b12f7d33021d4a5b784e9a8355feb38ad0f68cc72f6ce0e95f8090386d,PodSandboxId:d97cc1b9bfb7e3b6554a11e8a99779d72501c9ed2627a4f653afe5e63678f046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726776402202701308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.container.hash: 7
df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f0a92696c2dd48ea17d23a80293b334aafee2af059bc2b881cc64a2250c13a,PodSandboxId:3d7c4d3431ba405cc382d772533b7f690de776d4e0118efaa2c04205df266838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726776402095959832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726776398920591256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08cd67d64187e006994bc65839810a122496131699388a5379e209bf1e1b614,PodSandboxId:111fee9576f330deeea7b39a27ba3438989137455f74de336149bc60f6df7990,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726776081068852145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89,PodSandboxId:79a63ce099f45bd7977e3ae258f8d8cea024ad943b1d108eff7f159926dd7238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726776022932528561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: c56b8135-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b,PodSandboxId:d3fa20aed888f943be3030a650c6f710139632afbef3097461659d701298c3b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726776010913361485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b,PodSandboxId:c1a37209beb6fc5e334ca94bc59827bb5253e7859d94ad3dec33e37d856d5624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726776010850418703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31
-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f,PodSandboxId:f4592b7fce465589a0e1c51c95be50805f1129af964d1983dd06209bb65420bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726775998888676408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728,PodSandboxId:fe5f49b8d407d041f4cf9d974d854cde52e888e76ac3f66f5ed4cf54b1ca8111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775998866054027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af,PodSandboxId:7a92ed2f7d51e6a7e5b571faee81752a0b526c7420aa32e49252b63d2b7682aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726775998833399338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6,PodSandboxId:00405c53af3a27930cbdadb4a4ba8c44fd9334f2d2c6c21e4771f1de907b9c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775998804230790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d76a04e-bd6d-4b2f-8ce3-60b7a429e0a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.452554927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49183128-d377-40bd-9072-1cc0d00d7fb2 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.452648801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49183128-d377-40bd-9072-1cc0d00d7fb2 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.454429996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb95b5a8-da86-4d3e-bdf6-0232f7371134 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:08:27 multinode-282812 crio[2723]: time="2024-09-19 20:08:27.454973194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776507454947491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb95b5a8-da86-4d3e-bdf6-0232f7371134 name=/runtime.v1.ImageService/ImageFsInfo
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bb1d7bd2097da       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   398153f70f0c6       busybox-7dff88458-mmwbs
	87309d2462fc4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   2                   96b8f5b47395d       coredns-7c65d6cfc9-7p947
	d91b2b6716ecb       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   8e9ee218230cf       kindnet-z66g5
	b3df28a477bc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   224f00c2f2098       etcd-multinode-282812
	fdced286d5a6a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   9fd4a69759df5       storage-provisioner
	85476d7e8d2b8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   e1f00caf995de       kube-scheduler-multinode-282812
	30431653e0e43       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   8274e970fa8a4       kube-controller-manager-multinode-282812
	69bde3b12f7d3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   d97cc1b9bfb7e       kube-apiserver-multinode-282812
	15f0a92696c2d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   3d7c4d3431ba4       kube-proxy-gckr9
	d1a67d9740309       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Exited              coredns                   1                   96b8f5b47395d       coredns-7c65d6cfc9-7p947
	f08cd67d64187       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   111fee9576f33       busybox-7dff88458-mmwbs
	8a226c55e3f79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   79a63ce099f45       storage-provisioner
	45527d61634e0       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   d3fa20aed888f       kindnet-z66g5
	e4f064262cf36       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   c1a37209beb6f       kube-proxy-gckr9
	fb7cd7e02ae6b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   f4592b7fce465       etcd-multinode-282812
	dc3ea0d6f2bb7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   fe5f49b8d407d       kube-controller-manager-multinode-282812
	625d2fcd75cad       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   7a92ed2f7d51e       kube-scheduler-multinode-282812
	65a25f681cf69       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   00405c53af3a2       kube-apiserver-multinode-282812
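	
	The container status table above is the CRI-level view of the node after the restart: the restarted system pods are Running (attempt 1, or attempt 2 for coredns) next to their Exited earlier attempts. As an illustrative way to reproduce this view from inside the VM (assuming the CRI-O socket path given in the node annotations), one could query CRI-O with crictl:
	
	    $ minikube ssh -p multinode-282812
	    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	    $ sudo crictl inspect <container-id>    # detailed state for one container; <container-id> is a placeholder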
	
	
	==> coredns [87309d2462fc4f7dfa4a9c5baf53f6a205cce9e51b2069bf554d905b50062ee6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57245 - 58410 "HINFO IN 71375068057553640.5908403203819485535. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.014069259s
	
	
	==> coredns [d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:48969 - 15932 "HINFO IN 6371373735206316795.7861135580671048157. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018746069s
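	
	The errors from the exited coredns instance above (dial tcp 10.96.0.1:443: connect: connection refused) indicate that the kubernetes ClusterIP, i.e. the kube-apiserver, was unreachable while the control plane was restarting; the replacement instance (attempt 2) started cleanly once the API came back. A hedged example of collecting both the current and the previous instance's logs through the API, using the pod name shown above:
	
	    $ kubectl -n kube-system logs coredns-7c65d6cfc9-7p947
	    $ kubectl -n kube-system logs coredns-7c65d6cfc9-7p947 --previous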
	
	
	==> describe nodes <==
	Name:               multinode-282812
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-282812
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=multinode-282812
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T20_00_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 20:00:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-282812
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 20:08:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 20:06:51 +0000   Thu, 19 Sep 2024 19:59:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 20:06:51 +0000   Thu, 19 Sep 2024 19:59:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 20:06:51 +0000   Thu, 19 Sep 2024 19:59:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 20:06:51 +0000   Thu, 19 Sep 2024 20:00:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    multinode-282812
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5cf43698a42a4ce48b0c060c07aadae3
	  System UUID:                5cf43698-a42a-4ce4-8b0c-060c07aadae3
	  Boot ID:                    853f9e82-c4a8-4f86-acd0-9c089477abdb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mmwbs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 coredns-7c65d6cfc9-7p947                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m18s
	  kube-system                 etcd-multinode-282812                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m23s
	  kube-system                 kindnet-z66g5                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m18s
	  kube-system                 kube-apiserver-multinode-282812             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-controller-manager-multinode-282812    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-proxy-gckr9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-scheduler-multinode-282812             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 101s                  kube-proxy       
	  Normal   Starting                 8m16s                 kube-proxy       
	  Normal   Starting                 8m23s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8m23s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m23s                 kubelet          Node multinode-282812 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m23s                 kubelet          Node multinode-282812 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m23s                 kubelet          Node multinode-282812 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m19s                 node-controller  Node multinode-282812 event: Registered Node multinode-282812 in Controller
	  Normal   NodeReady                8m5s                  kubelet          Node multinode-282812 status is now: NodeReady
	  Warning  ContainerGCFailed        2m23s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             115s (x6 over 2m46s)  kubelet          Node multinode-282812 status is now: NodeNotReady
	  Normal   RegisteredNode           98s                   node-controller  Node multinode-282812 event: Registered Node multinode-282812 in Controller
	  Normal   Starting                 97s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  97s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  96s                   kubelet          Node multinode-282812 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s                   kubelet          Node multinode-282812 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     96s                   kubelet          Node multinode-282812 status is now: NodeHasSufficientPID
	
	
	Name:               multinode-282812-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-282812-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=multinode-282812
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T20_07_27_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 20:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-282812-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 20:08:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 20:07:57 +0000   Thu, 19 Sep 2024 20:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 20:07:57 +0000   Thu, 19 Sep 2024 20:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 20:07:57 +0000   Thu, 19 Sep 2024 20:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 20:07:57 +0000   Thu, 19 Sep 2024 20:07:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    multinode-282812-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 62c948409d25441e8e056ca589512803
	  System UUID:                62c94840-9d25-441e-8e05-6ca589512803
	  Boot ID:                    bd57a503-e00a-4e1d-b9cf-b0757a95652e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l8hqk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-stjkn              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-proxy-pbj4d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m27s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m33s (x2 over 7m33s)  kubelet     Node multinode-282812-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m33s (x2 over 7m33s)  kubelet     Node multinode-282812-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m33s (x2 over 7m33s)  kubelet     Node multinode-282812-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m12s                  kubelet     Node multinode-282812-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet     Node multinode-282812-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet     Node multinode-282812-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet     Node multinode-282812-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-282812-m02 status is now: NodeReady
	
	
	Name:               multinode-282812-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-282812-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=multinode-282812
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T20_08_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 20:08:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-282812-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 20:08:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 20:08:24 +0000   Thu, 19 Sep 2024 20:08:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 20:08:24 +0000   Thu, 19 Sep 2024 20:08:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 20:08:24 +0000   Thu, 19 Sep 2024 20:08:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 20:08:24 +0000   Thu, 19 Sep 2024 20:08:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    multinode-282812-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c090e621e14b4d3aa10f88a53d558e5e
	  System UUID:                c090e621-e14b-4d3a-a10f-88a53d558e5e
	  Boot ID:                    f1d41664-5da7-475d-9a3f-01204abec726
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jrlhz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-proxy-c4mtw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m30s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m42s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m36s (x2 over 6m36s)  kubelet     Node multinode-282812-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x2 over 6m36s)  kubelet     Node multinode-282812-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x2 over 6m36s)  kubelet     Node multinode-282812-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m15s                  kubelet     Node multinode-282812-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet     Node multinode-282812-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet     Node multinode-282812-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet     Node multinode-282812-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m46s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m27s                  kubelet     Node multinode-282812-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-282812-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-282812-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-282812-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-282812-m03 status is now: NodeReady
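	
	At capture time all three nodes report Ready. The control-plane node's only warnings are the ContainerGCFailed / NodeNotReady events from the window in which the CRI-O socket was unavailable during the restart, and multinode-282812-m03 still carries the node.kubernetes.io/not-ready:NoExecute taint from its readiness transition three seconds earlier. An illustrative way to re-request the same per-node view (node names as listed above):
	
	    $ kubectl get nodes -o wide
	    $ kubectl describe node multinode-282812 multinode-282812-m02 multinode-282812-m03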
	
	
	==> dmesg <==
	[  +0.058293] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.177391] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.127970] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.254722] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.838140] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +4.070663] systemd-fstab-generator[870]: Ignoring "noauto" option for root device
	[  +0.056166] kauditd_printk_skb: 158 callbacks suppressed
	[Sep19 20:00] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.091101] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.179185] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.110556] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.293977] kauditd_printk_skb: 69 callbacks suppressed
	[Sep19 20:01] kauditd_printk_skb: 12 callbacks suppressed
	[Sep19 20:06] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.140037] systemd-fstab-generator[2659]: Ignoring "noauto" option for root device
	[  +0.171293] systemd-fstab-generator[2673]: Ignoring "noauto" option for root device
	[  +0.137697] systemd-fstab-generator[2685]: Ignoring "noauto" option for root device
	[  +0.277525] systemd-fstab-generator[2713]: Ignoring "noauto" option for root device
	[  +0.672603] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +3.712910] kauditd_printk_skb: 152 callbacks suppressed
	[  +7.124274] kauditd_printk_skb: 42 callbacks suppressed
	[  +1.233395] systemd-fstab-generator[3686]: Ignoring "noauto" option for root device
	[  +4.111486] kauditd_printk_skb: 21 callbacks suppressed
	[Sep19 20:07] systemd-fstab-generator[3859]: Ignoring "noauto" option for root device
	[ +13.196838] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [b3df28a477bc9b3a710219db447412f3bffc1d630456b14fc6bd107bbea44c55] <==
	{"level":"info","ts":"2024-09-19T20:06:43.141841Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-19T20:06:43.141906Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-19T20:06:43.141917Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-19T20:06:43.142612Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T20:06:43.144811Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-09-19T20:06:43.144840Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-09-19T20:06:43.144755Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-19T20:06:43.146025Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aad771494ea7416a","initial-advertise-peer-urls":["https://192.168.39.87:2380"],"listen-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.87:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-19T20:06:43.146142Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-19T20:06:44.414562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-19T20:06:44.414680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-19T20:06:44.414729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgPreVoteResp from aad771494ea7416a at term 2"}
	{"level":"info","ts":"2024-09-19T20:06:44.414777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became candidate at term 3"}
	{"level":"info","ts":"2024-09-19T20:06:44.414801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgVoteResp from aad771494ea7416a at term 3"}
	{"level":"info","ts":"2024-09-19T20:06:44.414827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became leader at term 3"}
	{"level":"info","ts":"2024-09-19T20:06:44.414853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aad771494ea7416a elected leader aad771494ea7416a at term 3"}
	{"level":"info","ts":"2024-09-19T20:06:44.417528Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aad771494ea7416a","local-member-attributes":"{Name:multinode-282812 ClientURLs:[https://192.168.39.87:2379]}","request-path":"/0/members/aad771494ea7416a/attributes","cluster-id":"8794d44e1d88e05d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T20:06:44.417618Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T20:06:44.417723Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T20:06:44.417766Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T20:06:44.417784Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T20:06:44.418739Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T20:06:44.418830Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T20:06:44.419601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T20:06:44.419742Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.87:2379"}
	
	
	==> etcd [fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f] <==
	{"level":"info","ts":"2024-09-19T20:00:00.496336Z","caller":"traceutil/trace.go:171","msg":"trace[1524938224] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:1; }","duration":"141.141279ms","start":"2024-09-19T20:00:00.355179Z","end":"2024-09-19T20:00:00.496320Z","steps":["trace[1524938224] 'count revisions from in-memory index tree'  (duration: 140.970388ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:00:00.496451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.142189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-09-19T20:00:00.496511Z","caller":"traceutil/trace.go:171","msg":"trace[1779907709] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:1; }","duration":"141.209311ms","start":"2024-09-19T20:00:00.355296Z","end":"2024-09-19T20:00:00.496505Z","steps":["trace[1779907709] 'range keys from in-memory index tree'  (duration: 141.116488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:00:54.828847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.964421ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4713740539675766913 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-282812-m02.17f6bdac3c539488\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-282812-m02.17f6bdac3c539488\" value_size:646 lease:4713740539675766328 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T20:00:54.828938Z","caller":"traceutil/trace.go:171","msg":"trace[1876267680] linearizableReadLoop","detail":"{readStateIndex:457; appliedIndex:456; }","duration":"158.751071ms","start":"2024-09-19T20:00:54.670168Z","end":"2024-09-19T20:00:54.828919Z","steps":["trace[1876267680] 'read index received'  (duration: 20.996µs)","trace[1876267680] 'applied index is now lower than readState.Index'  (duration: 158.729109ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T20:00:54.829002Z","caller":"traceutil/trace.go:171","msg":"trace[622685273] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"235.686418ms","start":"2024-09-19T20:00:54.593307Z","end":"2024-09-19T20:00:54.828994Z","steps":["trace[622685273] 'process raft request'  (duration: 75.921905ms)","trace[622685273] 'compare'  (duration: 158.831945ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T20:00:54.829295Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.106441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-282812-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T20:00:54.829337Z","caller":"traceutil/trace.go:171","msg":"trace[790832056] range","detail":"{range_begin:/registry/minions/multinode-282812-m02; range_end:; response_count:0; response_revision:440; }","duration":"159.16534ms","start":"2024-09-19T20:00:54.670164Z","end":"2024-09-19T20:00:54.829329Z","steps":["trace[790832056] 'agreement among raft nodes before linearized reading'  (duration: 159.09047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:01:51.453686Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.055733ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4713740539675767432 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-282812-m03.17f6bdb96b6b3f3b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-282812-m03.17f6bdb96b6b3f3b\" value_size:646 lease:4713740539675767037 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T20:01:51.453928Z","caller":"traceutil/trace.go:171","msg":"trace[1864109658] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:605; }","duration":"167.532796ms","start":"2024-09-19T20:01:51.286378Z","end":"2024-09-19T20:01:51.453911Z","steps":["trace[1864109658] 'read index received'  (duration: 40.117116ms)","trace[1864109658] 'applied index is now lower than readState.Index'  (duration: 127.415002ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T20:01:51.454033Z","caller":"traceutil/trace.go:171","msg":"trace[943548277] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"219.564162ms","start":"2024-09-19T20:01:51.234454Z","end":"2024-09-19T20:01:51.454018Z","steps":["trace[943548277] 'process raft request'  (duration: 92.083344ms)","trace[943548277] 'compare'  (duration: 126.956601ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T20:01:51.454459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.866099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-282812-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T20:01:51.454769Z","caller":"traceutil/trace.go:171","msg":"trace[441130642] range","detail":"{range_begin:/registry/minions/multinode-282812-m03; range_end:; response_count:0; response_revision:576; }","duration":"168.336367ms","start":"2024-09-19T20:01:51.286374Z","end":"2024-09-19T20:01:51.454711Z","steps":["trace[441130642] 'agreement among raft nodes before linearized reading'  (duration: 167.645982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:01:51.455513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.529244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-19T20:01:51.455645Z","caller":"traceutil/trace.go:171","msg":"trace[129924880] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:576; }","duration":"128.704509ms","start":"2024-09-19T20:01:51.326916Z","end":"2024-09-19T20:01:51.455620Z","steps":["trace[129924880] 'agreement among raft nodes before linearized reading'  (duration: 128.329092ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T20:05:05.914802Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-19T20:05:05.914938Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-282812","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"]}
	{"level":"warn","ts":"2024-09-19T20:05:05.915081Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:05:05.915258Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:05:05.958018Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.87:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:05:05.958088Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.87:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-19T20:05:05.960630Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aad771494ea7416a","current-leader-member-id":"aad771494ea7416a"}
	{"level":"info","ts":"2024-09-19T20:05:05.965578Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-09-19T20:05:05.965757Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-09-19T20:05:05.965798Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-282812","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"]}
	
	
	==> kernel <==
	 20:08:27 up 8 min,  0 users,  load average: 0.23, 0.23, 0.13
	Linux multinode-282812 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b] <==
	I0919 20:04:21.879825       1 main.go:299] handling current node
	I0919 20:04:31.879271       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:04:31.879451       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:04:31.879659       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:04:31.879685       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.5.0/24] 
	I0919 20:04:31.879748       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:04:31.879766       1 main.go:299] handling current node
	I0919 20:04:41.870816       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:04:41.871036       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:04:41.871258       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:04:41.871288       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.5.0/24] 
	I0919 20:04:41.871405       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:04:41.871426       1 main.go:299] handling current node
	I0919 20:04:51.879978       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:04:51.880077       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.5.0/24] 
	I0919 20:04:51.880284       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:04:51.880314       1 main.go:299] handling current node
	I0919 20:04:51.880338       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:04:51.880353       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:05:01.876436       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:05:01.876484       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.5.0/24] 
	I0919 20:05:01.876616       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:05:01.876736       1 main.go:299] handling current node
	I0919 20:05:01.876880       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:05:01.877046       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [d91b2b6716ecb009c297b64b6e3a197b2b1ccfb373808d9960b1b97761172f09] <==
	I0919 20:07:53.540624       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:07:53.540739       1 main.go:299] handling current node
	I0919 20:07:53.540772       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:07:53.540791       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:07:53.540922       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:07:53.540975       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.5.0/24] 
	I0919 20:08:03.542480       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:08:03.542626       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.5.0/24] 
	I0919 20:08:03.542781       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:08:03.542830       1 main.go:299] handling current node
	I0919 20:08:03.542861       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:08:03.542879       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:08:13.537678       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:08:13.537738       1 main.go:299] handling current node
	I0919 20:08:13.537757       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:08:13.537765       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:08:13.537969       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:08:13.537995       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.2.0/24] 
	I0919 20:08:13.538054       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.238 Flags: [] Table: 0} 
	I0919 20:08:23.544281       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:08:23.544345       1 main.go:299] handling current node
	I0919 20:08:23.544370       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:08:23.544375       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:08:23.544552       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:08:23.544578       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6] <==
	I0919 20:00:02.600799       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0919 20:00:02.600838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 20:00:03.363896       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 20:00:03.408494       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 20:00:03.506298       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0919 20:00:03.513052       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.87]
	I0919 20:00:03.514004       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 20:00:03.518010       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 20:00:03.798332       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0919 20:00:04.552402       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0919 20:00:04.567800       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 20:00:04.579878       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 20:00:09.334588       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0919 20:00:09.499206       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0919 20:01:22.206050       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41092: use of closed network connection
	E0919 20:01:22.382465       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41108: use of closed network connection
	E0919 20:01:22.557863       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41118: use of closed network connection
	E0919 20:01:22.733398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41132: use of closed network connection
	E0919 20:01:22.894715       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41142: use of closed network connection
	E0919 20:01:23.056367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41162: use of closed network connection
	E0919 20:01:23.323830       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41182: use of closed network connection
	E0919 20:01:23.493609       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41202: use of closed network connection
	E0919 20:01:23.658689       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41232: use of closed network connection
	E0919 20:01:23.828833       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41246: use of closed network connection
	I0919 20:05:05.918255       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [69bde3b12f7d33021d4a5b784e9a8355feb38ad0f68cc72f6ce0e95f8090386d] <==
	I0919 20:06:45.776765       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0919 20:06:45.777871       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0919 20:06:45.784186       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0919 20:06:45.784292       1 aggregator.go:171] initial CRD sync complete...
	I0919 20:06:45.784322       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 20:06:45.784345       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 20:06:45.784367       1 cache.go:39] Caches are synced for autoregister controller
	I0919 20:06:45.792190       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0919 20:06:45.792249       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0919 20:06:45.793566       1 shared_informer.go:320] Caches are synced for configmaps
	I0919 20:06:45.793638       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 20:06:45.793669       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0919 20:06:45.793657       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 20:06:45.796363       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 20:06:45.796453       1 policy_source.go:224] refreshing policies
	I0919 20:06:45.800678       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0919 20:06:45.860543       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 20:06:46.666354       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 20:06:49.155327       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 20:06:49.261074       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 20:06:51.482852       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 20:06:51.611619       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0919 20:06:51.633518       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0919 20:06:51.714667       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 20:06:51.720298       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [30431653e0e43ed529bb73220f39ab0fe58f2228aca51af2005a98e730ee5eca] <==
	I0919 20:07:46.715841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:07:46.731265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:07:46.734065       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.461µs"
	I0919 20:07:46.749227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.52µs"
	I0919 20:07:49.231828       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:07:50.413301       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.619878ms"
	I0919 20:07:50.413397       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.581µs"
	I0919 20:07:57.602520       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:08:04.505928       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:04.524684       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:04.760989       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:08:04.761189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:05.955728       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:08:05.957255       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282812-m03\" does not exist"
	I0919 20:08:05.976002       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282812-m03" podCIDRs=["10.244.2.0/24"]
	I0919 20:08:05.976045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:05.976076       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:05.976365       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:06.285070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:06.625731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:09.339967       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:16.185517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:24.641637       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:08:24.641711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:24.653514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	
	
	==> kube-controller-manager [dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728] <==
	I0919 20:02:40.314473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:40.544592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:40.544738       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:02:41.567886       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:02:41.568810       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282812-m03\" does not exist"
	I0919 20:02:41.590291       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282812-m03" podCIDRs=["10.244.5.0/24"]
	I0919 20:02:41.590394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:41.590442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:41.599725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:41.913206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:43.603485       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:51.859243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:00.415753       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:03:00.416221       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:00.427514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:03.594151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:38.610651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:03:38.610930       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m03"
	I0919 20:03:38.627877       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:03:38.665346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.40248ms"
	I0919 20:03:38.666339       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.857µs"
	I0919 20:03:43.667024       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:43.691649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:43.722289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:03:53.803569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	
	
	==> kube-proxy [15f0a92696c2dd48ea17d23a80293b334aafee2af059bc2b881cc64a2250c13a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 20:06:43.375354       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 20:06:45.761784       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.87"]
	E0919 20:06:45.761874       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 20:06:45.830218       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 20:06:45.830271       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 20:06:45.830296       1 server_linux.go:169] "Using iptables Proxier"
	I0919 20:06:45.832785       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 20:06:45.833138       1 server.go:483] "Version info" version="v1.31.1"
	I0919 20:06:45.833191       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:06:45.834941       1 config.go:199] "Starting service config controller"
	I0919 20:06:45.834993       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 20:06:45.835039       1 config.go:105] "Starting endpoint slice config controller"
	I0919 20:06:45.835058       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 20:06:45.835683       1 config.go:328] "Starting node config controller"
	I0919 20:06:45.835717       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 20:06:45.935302       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 20:06:45.935485       1 shared_informer.go:320] Caches are synced for service config
	I0919 20:06:45.937198       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 20:00:11.052505       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 20:00:11.062454       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.87"]
	E0919 20:00:11.062662       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 20:00:11.130244       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 20:00:11.130282       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 20:00:11.130304       1 server_linux.go:169] "Using iptables Proxier"
	I0919 20:00:11.133161       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 20:00:11.133461       1 server.go:483] "Version info" version="v1.31.1"
	I0919 20:00:11.133612       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:00:11.135205       1 config.go:199] "Starting service config controller"
	I0919 20:00:11.135255       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 20:00:11.135298       1 config.go:105] "Starting endpoint slice config controller"
	I0919 20:00:11.135314       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 20:00:11.135838       1 config.go:328] "Starting node config controller"
	I0919 20:00:11.135911       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 20:00:11.235444       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 20:00:11.235480       1 shared_informer.go:320] Caches are synced for service config
	I0919 20:00:11.236073       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af] <==
	E0919 20:00:01.790894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:01.790962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 20:00:01.790996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.603273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 20:00:02.603386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.651380       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 20:00:02.651530       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0919 20:00:02.673619       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 20:00:02.673667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.674430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 20:00:02.674535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.799592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 20:00:02.799707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.855700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 20:00:02.855917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.860835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 20:00:02.860887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.924664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 20:00:02.924714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.982482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 20:00:02.982586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.995604       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 20:00:02.995664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0919 20:00:05.254301       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 20:05:05.920372       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [85476d7e8d2b82a4dc3231d06dcca93f418d33c58c1a55f9da28344d912aac0a] <==
	I0919 20:06:43.585665       1 serving.go:386] Generated self-signed cert in-memory
	W0919 20:06:45.737720       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 20:06:45.737808       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 20:06:45.737836       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 20:06:45.737866       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 20:06:45.762237       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0919 20:06:45.762363       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:06:45.764958       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 20:06:45.765263       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:06:45.765997       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 20:06:45.768076       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 20:06:45.866860       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 20:06:53 multinode-282812 kubelet[3693]: I0919 20:06:53.008511    3693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 20:06:54 multinode-282812 kubelet[3693]: I0919 20:06:54.597653    3693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 20:07:00 multinode-282812 kubelet[3693]: E0919 20:07:00.927256    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776420926881156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:00 multinode-282812 kubelet[3693]: E0919 20:07:00.928020    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776420926881156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:10 multinode-282812 kubelet[3693]: E0919 20:07:10.931052    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776430930579202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:10 multinode-282812 kubelet[3693]: E0919 20:07:10.931592    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776430930579202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:20 multinode-282812 kubelet[3693]: E0919 20:07:20.934863    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776440933935901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:20 multinode-282812 kubelet[3693]: E0919 20:07:20.935454    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776440933935901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:30 multinode-282812 kubelet[3693]: E0919 20:07:30.938284    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776450937548546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:30 multinode-282812 kubelet[3693]: E0919 20:07:30.940749    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776450937548546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:40 multinode-282812 kubelet[3693]: E0919 20:07:40.943186    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776460941919891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:40 multinode-282812 kubelet[3693]: E0919 20:07:40.943237    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776460941919891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:50 multinode-282812 kubelet[3693]: E0919 20:07:50.846016    3693 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 19 20:07:50 multinode-282812 kubelet[3693]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 20:07:50 multinode-282812 kubelet[3693]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 20:07:50 multinode-282812 kubelet[3693]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 20:07:50 multinode-282812 kubelet[3693]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 20:07:50 multinode-282812 kubelet[3693]: E0919 20:07:50.951169    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776470946390277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:07:50 multinode-282812 kubelet[3693]: E0919 20:07:50.951221    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776470946390277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:08:00 multinode-282812 kubelet[3693]: E0919 20:08:00.953383    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776480952438179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:08:00 multinode-282812 kubelet[3693]: E0919 20:08:00.953413    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776480952438179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:08:10 multinode-282812 kubelet[3693]: E0919 20:08:10.955440    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776490954731856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:08:10 multinode-282812 kubelet[3693]: E0919 20:08:10.956023    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776490954731856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:08:20 multinode-282812 kubelet[3693]: E0919 20:08:20.957695    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776500957030810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:08:20 multinode-282812 kubelet[3693]: E0919 20:08:20.958403    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776500957030810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 20:08:27.042778   49566 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19664-7917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-282812 -n multinode-282812
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-282812 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (325.87s)
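Note on the stderr line in the failure above: `bufio.Scanner: token too long` while reading lastStart.txt is Go's bufio.Scanner hitting its default 64 KiB per-token limit on the very long single-line entries in that log. Below is a minimal sketch of reading such a file with an enlarged scanner buffer; it is illustrative only, not minikube's own code, and the file name is a hypothetical stand-in.

	// Sketch: scanning a log file whose individual lines exceed
	// bufio.Scanner's default 64 KiB token limit.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line cap from the default bufio.MaxScanTokenSize (64 KiB) to 1 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// With the default limit, this is where "token too long" surfaces.
			fmt.Fprintln(os.Stderr, "failed to read file:", err)
		}
	}

Scanner.Buffer must be called before the first Scan; with the stock 64 KiB cap, any single log line longer than that aborts the read with exactly the error shown in the stderr block.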

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 stop
E0919 20:08:59.337619   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-282812 stop: exit status 82 (2m0.461395463s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-282812-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-282812 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-282812 status: (18.851368554s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-282812 status --alsologtostderr: (3.35918143s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-282812 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-282812 status --alsologtostderr": 
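Exit status 82 in this failure corresponds to the GUEST_STOP_TIMEOUT shown in the stderr block: the stop command asked the m02 VM to shut down, but after roughly two minutes the machine still reported state "Running", so the follow-up status checks found hosts and kubelets that should have been stopped still up. The sketch below illustrates the generic stop-then-poll pattern implied by that message; the helpers are hypothetical and this is not minikube's implementation.

	// Sketch of a stop-then-poll loop with a deadline.
	package main

	import (
		"fmt"
		"time"
	)

	// vmState is a hypothetical stand-in for querying the libvirt domain state;
	// in this failure the guest kept answering "Running" until the deadline passed.
	func vmState(name string) string { return "Running" }

	func stopWithTimeout(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if vmState(name) == "Stopped" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", vmState(name))
	}

	func main() {
		// A short timeout keeps the demo quick; the real command waited about 2 minutes.
		if err := stopWithTimeout("multinode-282812-m02", 6*time.Second); err != nil {
			fmt.Println("Exiting due to GUEST_STOP_TIMEOUT:", err)
		}
	}

Because the guest never left "Running", the deadline expired and the command returned a non-zero exit code, which is what the test asserts on above.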
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-282812 -n multinode-282812
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-282812 logs -n 25: (1.423407789s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m02:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812:/home/docker/cp-test_multinode-282812-m02_multinode-282812.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n multinode-282812 sudo cat                                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /home/docker/cp-test_multinode-282812-m02_multinode-282812.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m02:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03:/home/docker/cp-test_multinode-282812-m02_multinode-282812-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n multinode-282812-m03 sudo cat                                   | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /home/docker/cp-test_multinode-282812-m02_multinode-282812-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp testdata/cp-test.txt                                                | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m03:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile472680244/001/cp-test_multinode-282812-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m03:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812:/home/docker/cp-test_multinode-282812-m03_multinode-282812.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n multinode-282812 sudo cat                                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /home/docker/cp-test_multinode-282812-m03_multinode-282812.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m03:/home/docker/cp-test.txt                       | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m02:/home/docker/cp-test_multinode-282812-m03_multinode-282812-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n multinode-282812-m02 sudo cat                                   | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /home/docker/cp-test_multinode-282812-m03_multinode-282812-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-282812 node stop m03                                                          | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	| node    | multinode-282812 node start                                                             | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:03 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-282812                                                                | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:03 UTC |                     |
	| stop    | -p multinode-282812                                                                     | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:03 UTC |                     |
	| start   | -p multinode-282812                                                                     | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:05 UTC | 19 Sep 24 20:08 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-282812                                                                | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:08 UTC |                     |
	| node    | multinode-282812 node delete                                                            | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:08 UTC | 19 Sep 24 20:08 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-282812 stop                                                                   | multinode-282812 | jenkins | v1.34.0 | 19 Sep 24 20:08 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 20:05:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 20:05:04.933210   48464 out.go:345] Setting OutFile to fd 1 ...
	I0919 20:05:04.933427   48464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:05:04.933465   48464 out.go:358] Setting ErrFile to fd 2...
	I0919 20:05:04.933481   48464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:05:04.934115   48464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 20:05:04.934706   48464 out.go:352] Setting JSON to false
	I0919 20:05:04.935691   48464 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6449,"bootTime":1726769856,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 20:05:04.935801   48464 start.go:139] virtualization: kvm guest
	I0919 20:05:04.938508   48464 out.go:177] * [multinode-282812] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 20:05:04.940089   48464 notify.go:220] Checking for updates...
	I0919 20:05:04.940140   48464 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 20:05:04.941790   48464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 20:05:04.943316   48464 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 20:05:04.944872   48464 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 20:05:04.946297   48464 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 20:05:04.947713   48464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 20:05:04.949573   48464 config.go:182] Loaded profile config "multinode-282812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 20:05:04.949666   48464 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 20:05:04.950132   48464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:05:04.950191   48464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:05:04.965413   48464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I0919 20:05:04.965928   48464 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:05:04.966450   48464 main.go:141] libmachine: Using API Version  1
	I0919 20:05:04.966470   48464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:05:04.966772   48464 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:05:04.966942   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:05:05.003385   48464 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 20:05:05.004923   48464 start.go:297] selected driver: kvm2
	I0919 20:05:05.004934   48464 start.go:901] validating driver "kvm2" against &{Name:multinode-282812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-282812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:05:05.005105   48464 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 20:05:05.005427   48464 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 20:05:05.005487   48464 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 20:05:05.020327   48464 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 20:05:05.020986   48464 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 20:05:05.021019   48464 cni.go:84] Creating CNI manager for ""
	I0919 20:05:05.021105   48464 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 20:05:05.021242   48464 start.go:340] cluster config:
	{Name:multinode-282812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-282812 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:05:05.021392   48464 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 20:05:05.024202   48464 out.go:177] * Starting "multinode-282812" primary control-plane node in "multinode-282812" cluster
	I0919 20:05:05.025617   48464 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 20:05:05.025671   48464 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 20:05:05.025679   48464 cache.go:56] Caching tarball of preloaded images
	I0919 20:05:05.025789   48464 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 20:05:05.025804   48464 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 20:05:05.025915   48464 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/config.json ...
	I0919 20:05:05.026145   48464 start.go:360] acquireMachinesLock for multinode-282812: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 20:05:05.026224   48464 start.go:364] duration metric: took 59.676µs to acquireMachinesLock for "multinode-282812"
	I0919 20:05:05.026243   48464 start.go:96] Skipping create...Using existing machine configuration
	I0919 20:05:05.026250   48464 fix.go:54] fixHost starting: 
	I0919 20:05:05.026544   48464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:05:05.026584   48464 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:05:05.040914   48464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0919 20:05:05.041405   48464 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:05:05.041890   48464 main.go:141] libmachine: Using API Version  1
	I0919 20:05:05.041923   48464 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:05:05.042254   48464 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:05:05.042440   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:05:05.042600   48464 main.go:141] libmachine: (multinode-282812) Calling .GetState
	I0919 20:05:05.044126   48464 fix.go:112] recreateIfNeeded on multinode-282812: state=Running err=<nil>
	W0919 20:05:05.044157   48464 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 20:05:05.046083   48464 out.go:177] * Updating the running kvm2 "multinode-282812" VM ...
	I0919 20:05:05.047491   48464 machine.go:93] provisionDockerMachine start ...
	I0919 20:05:05.047508   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:05:05.047685   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.050193   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.050606   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.050658   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.050737   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:05:05.050896   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.051042   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.051169   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:05:05.051310   48464 main.go:141] libmachine: Using SSH client type: native
	I0919 20:05:05.051497   48464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0919 20:05:05.051509   48464 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 20:05:05.158128   48464 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-282812
	
	I0919 20:05:05.158159   48464 main.go:141] libmachine: (multinode-282812) Calling .GetMachineName
	I0919 20:05:05.158439   48464 buildroot.go:166] provisioning hostname "multinode-282812"
	I0919 20:05:05.158477   48464 main.go:141] libmachine: (multinode-282812) Calling .GetMachineName
	I0919 20:05:05.158646   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.161331   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.161681   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.161716   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.161853   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:05:05.162023   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.162174   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.162308   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:05:05.162452   48464 main.go:141] libmachine: Using SSH client type: native
	I0919 20:05:05.162674   48464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0919 20:05:05.162688   48464 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-282812 && echo "multinode-282812" | sudo tee /etc/hostname
	I0919 20:05:05.289376   48464 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-282812
	
	I0919 20:05:05.289403   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.291891   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.292241   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.292266   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.292441   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:05:05.292607   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.292762   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.292882   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:05:05.293025   48464 main.go:141] libmachine: Using SSH client type: native
	I0919 20:05:05.293197   48464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0919 20:05:05.293214   48464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-282812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-282812/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-282812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 20:05:05.398035   48464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 20:05:05.398063   48464 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 20:05:05.398098   48464 buildroot.go:174] setting up certificates
	I0919 20:05:05.398110   48464 provision.go:84] configureAuth start
	I0919 20:05:05.398121   48464 main.go:141] libmachine: (multinode-282812) Calling .GetMachineName
	I0919 20:05:05.398364   48464 main.go:141] libmachine: (multinode-282812) Calling .GetIP
	I0919 20:05:05.400918   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.401303   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.401339   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.401490   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.403663   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.404000   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.404032   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.404175   48464 provision.go:143] copyHostCerts
	I0919 20:05:05.404211   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 20:05:05.404250   48464 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 20:05:05.404260   48464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 20:05:05.404347   48464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 20:05:05.404439   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 20:05:05.404462   48464 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 20:05:05.404471   48464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 20:05:05.404511   48464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 20:05:05.404610   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 20:05:05.404633   48464 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 20:05:05.404643   48464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 20:05:05.404675   48464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 20:05:05.404735   48464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.multinode-282812 san=[127.0.0.1 192.168.39.87 localhost minikube multinode-282812]
	I0919 20:05:05.624537   48464 provision.go:177] copyRemoteCerts
	I0919 20:05:05.624599   48464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 20:05:05.624624   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.627185   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.627633   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.627657   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.627771   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:05:05.627965   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.628111   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:05:05.628266   48464 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/multinode-282812/id_rsa Username:docker}
	I0919 20:05:05.712939   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 20:05:05.713015   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 20:05:05.737711   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 20:05:05.737792   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0919 20:05:05.765191   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 20:05:05.765271   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 20:05:05.792462   48464 provision.go:87] duration metric: took 394.339291ms to configureAuth
	I0919 20:05:05.792505   48464 buildroot.go:189] setting minikube options for container-runtime
	I0919 20:05:05.792741   48464 config.go:182] Loaded profile config "multinode-282812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 20:05:05.792822   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:05:05.795515   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.795847   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:05:05.795900   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:05:05.796064   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:05:05.796241   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.796381   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:05:05.796492   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:05:05.796616   48464 main.go:141] libmachine: Using SSH client type: native
	I0919 20:05:05.796769   48464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0919 20:05:05.796781   48464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 20:06:36.442745   48464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 20:06:36.442769   48464 machine.go:96] duration metric: took 1m31.395267139s to provisionDockerMachine
	I0919 20:06:36.442782   48464 start.go:293] postStartSetup for "multinode-282812" (driver="kvm2")
	I0919 20:06:36.442794   48464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 20:06:36.442810   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:06:36.443118   48464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 20:06:36.443155   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:06:36.446327   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.446806   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:36.446836   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.447014   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:06:36.447197   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:06:36.447340   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:06:36.447454   48464 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/multinode-282812/id_rsa Username:docker}
	I0919 20:06:36.536460   48464 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 20:06:36.541277   48464 command_runner.go:130] > NAME=Buildroot
	I0919 20:06:36.541302   48464 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0919 20:06:36.541308   48464 command_runner.go:130] > ID=buildroot
	I0919 20:06:36.541315   48464 command_runner.go:130] > VERSION_ID=2023.02.9
	I0919 20:06:36.541323   48464 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0919 20:06:36.541370   48464 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 20:06:36.541388   48464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 20:06:36.541452   48464 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 20:06:36.541536   48464 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 20:06:36.541549   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /etc/ssl/certs/151162.pem
	I0919 20:06:36.541654   48464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 20:06:36.551091   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 20:06:36.575189   48464 start.go:296] duration metric: took 132.393548ms for postStartSetup
	I0919 20:06:36.575231   48464 fix.go:56] duration metric: took 1m31.548980366s for fixHost
	I0919 20:06:36.575255   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:06:36.578159   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.578637   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:36.578662   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.578801   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:06:36.579038   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:06:36.579198   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:06:36.579293   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:06:36.579419   48464 main.go:141] libmachine: Using SSH client type: native
	I0919 20:06:36.579629   48464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0919 20:06:36.579644   48464 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 20:06:36.682025   48464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726776396.653021410
	
	I0919 20:06:36.682048   48464 fix.go:216] guest clock: 1726776396.653021410
	I0919 20:06:36.682055   48464 fix.go:229] Guest: 2024-09-19 20:06:36.65302141 +0000 UTC Remote: 2024-09-19 20:06:36.575235071 +0000 UTC m=+91.675920701 (delta=77.786339ms)
	I0919 20:06:36.682074   48464 fix.go:200] guest clock delta is within tolerance: 77.786339ms
	I0919 20:06:36.682080   48464 start.go:83] releasing machines lock for "multinode-282812", held for 1m31.655843579s
	I0919 20:06:36.682102   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:06:36.682357   48464 main.go:141] libmachine: (multinode-282812) Calling .GetIP
	I0919 20:06:36.685220   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.685559   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:36.685581   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.685823   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:06:36.686318   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:06:36.686462   48464 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:06:36.686560   48464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 20:06:36.686608   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:06:36.686665   48464 ssh_runner.go:195] Run: cat /version.json
	I0919 20:06:36.686699   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:06:36.689288   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.689610   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:36.689646   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.689682   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.689788   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:06:36.689977   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:06:36.690028   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:36.690051   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:36.690128   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:06:36.690209   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:06:36.690264   48464 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/multinode-282812/id_rsa Username:docker}
	I0919 20:06:36.690327   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:06:36.690445   48464 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:06:36.690576   48464 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/multinode-282812/id_rsa Username:docker}
	I0919 20:06:36.766617   48464 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0919 20:06:36.766750   48464 ssh_runner.go:195] Run: systemctl --version
	I0919 20:06:36.791866   48464 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0919 20:06:36.791926   48464 command_runner.go:130] > systemd 252 (252)
	I0919 20:06:36.791946   48464 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0919 20:06:36.791998   48464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 20:06:36.950306   48464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 20:06:36.957247   48464 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0919 20:06:36.957345   48464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 20:06:36.957422   48464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 20:06:36.968377   48464 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 20:06:36.968403   48464 start.go:495] detecting cgroup driver to use...
	I0919 20:06:36.968462   48464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 20:06:36.985251   48464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 20:06:36.999791   48464 docker.go:217] disabling cri-docker service (if available) ...
	I0919 20:06:36.999840   48464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 20:06:37.014146   48464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 20:06:37.028337   48464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 20:06:37.167281   48464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 20:06:37.304440   48464 docker.go:233] disabling docker service ...
	I0919 20:06:37.304501   48464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 20:06:37.321183   48464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 20:06:37.335315   48464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 20:06:37.474457   48464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 20:06:37.613514   48464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 20:06:37.627754   48464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 20:06:37.646823   48464 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0919 20:06:37.647388   48464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 20:06:37.647464   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.658169   48464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 20:06:37.658232   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.668479   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.679361   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.689764   48464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 20:06:37.700568   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.711108   48464 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.722868   48464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:06:37.733532   48464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 20:06:37.743352   48464 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0919 20:06:37.743441   48464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 20:06:37.753726   48464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:06:37.892610   48464 ssh_runner.go:195] Run: sudo systemctl restart crio
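Note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is switched to cgroupfs, conmon is moved into the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls. The Go sketch below is illustrative only (it operates on an assumed in-memory snippet, not the VM's real config file) and reproduces the first two substitutions:

// Illustrative sketch only: apply the same pause_image and cgroup_manager
// substitutions as the sed commands in the log, against an assumed snippet
// of 02-crio.conf held in memory.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"

	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}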
	I0919 20:06:38.098090   48464 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 20:06:38.098186   48464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 20:06:38.104256   48464 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0919 20:06:38.104283   48464 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0919 20:06:38.104303   48464 command_runner.go:130] > Device: 0,22	Inode: 1313        Links: 1
	I0919 20:06:38.104313   48464 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 20:06:38.104321   48464 command_runner.go:130] > Access: 2024-09-19 20:06:38.034913323 +0000
	I0919 20:06:38.104329   48464 command_runner.go:130] > Modify: 2024-09-19 20:06:37.955911284 +0000
	I0919 20:06:38.104334   48464 command_runner.go:130] > Change: 2024-09-19 20:06:37.955911284 +0000
	I0919 20:06:38.104360   48464 command_runner.go:130] >  Birth: -
	I0919 20:06:38.104393   48464 start.go:563] Will wait 60s for crictl version
	I0919 20:06:38.104431   48464 ssh_runner.go:195] Run: which crictl
	I0919 20:06:38.108252   48464 command_runner.go:130] > /usr/bin/crictl
	I0919 20:06:38.108358   48464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 20:06:38.150443   48464 command_runner.go:130] > Version:  0.1.0
	I0919 20:06:38.150470   48464 command_runner.go:130] > RuntimeName:  cri-o
	I0919 20:06:38.150474   48464 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0919 20:06:38.150479   48464 command_runner.go:130] > RuntimeApiVersion:  v1
	I0919 20:06:38.150498   48464 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
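Note: the runtime check above shells out to crictl version and reads back simple "Key:  value" pairs. A minimal, hypothetical parser for that output format (not minikube's actual code) could look like this:

// Hypothetical helper: parse the "Key:  value" lines printed by `crictl version`
// into a map, as shown in the log output above.
package main

import (
	"fmt"
	"strings"
)

func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	for _, line := range strings.Split(out, "\n") {
		key, val, ok := strings.Cut(line, ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(key)] = strings.TrimSpace(val)
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(out)
	fmt.Printf("%s %s (CRI API %s)\n", v["RuntimeName"], v["RuntimeVersion"], v["RuntimeApiVersion"])
	// Output: cri-o 1.29.1 (CRI API v1)
}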
	I0919 20:06:38.150563   48464 ssh_runner.go:195] Run: crio --version
	I0919 20:06:38.178865   48464 command_runner.go:130] > crio version 1.29.1
	I0919 20:06:38.178889   48464 command_runner.go:130] > Version:        1.29.1
	I0919 20:06:38.178895   48464 command_runner.go:130] > GitCommit:      unknown
	I0919 20:06:38.178899   48464 command_runner.go:130] > GitCommitDate:  unknown
	I0919 20:06:38.178903   48464 command_runner.go:130] > GitTreeState:   clean
	I0919 20:06:38.178908   48464 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0919 20:06:38.178912   48464 command_runner.go:130] > GoVersion:      go1.21.6
	I0919 20:06:38.178918   48464 command_runner.go:130] > Compiler:       gc
	I0919 20:06:38.178950   48464 command_runner.go:130] > Platform:       linux/amd64
	I0919 20:06:38.178957   48464 command_runner.go:130] > Linkmode:       dynamic
	I0919 20:06:38.178966   48464 command_runner.go:130] > BuildTags:      
	I0919 20:06:38.178970   48464 command_runner.go:130] >   containers_image_ostree_stub
	I0919 20:06:38.178974   48464 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0919 20:06:38.178978   48464 command_runner.go:130] >   btrfs_noversion
	I0919 20:06:38.178982   48464 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0919 20:06:38.178987   48464 command_runner.go:130] >   libdm_no_deferred_remove
	I0919 20:06:38.178991   48464 command_runner.go:130] >   seccomp
	I0919 20:06:38.178995   48464 command_runner.go:130] > LDFlags:          unknown
	I0919 20:06:38.178999   48464 command_runner.go:130] > SeccompEnabled:   true
	I0919 20:06:38.179004   48464 command_runner.go:130] > AppArmorEnabled:  false
	I0919 20:06:38.180289   48464 ssh_runner.go:195] Run: crio --version
	I0919 20:06:38.207944   48464 command_runner.go:130] > crio version 1.29.1
	I0919 20:06:38.207966   48464 command_runner.go:130] > Version:        1.29.1
	I0919 20:06:38.207972   48464 command_runner.go:130] > GitCommit:      unknown
	I0919 20:06:38.207976   48464 command_runner.go:130] > GitCommitDate:  unknown
	I0919 20:06:38.207979   48464 command_runner.go:130] > GitTreeState:   clean
	I0919 20:06:38.207985   48464 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0919 20:06:38.207989   48464 command_runner.go:130] > GoVersion:      go1.21.6
	I0919 20:06:38.207993   48464 command_runner.go:130] > Compiler:       gc
	I0919 20:06:38.207997   48464 command_runner.go:130] > Platform:       linux/amd64
	I0919 20:06:38.208001   48464 command_runner.go:130] > Linkmode:       dynamic
	I0919 20:06:38.208005   48464 command_runner.go:130] > BuildTags:      
	I0919 20:06:38.208009   48464 command_runner.go:130] >   containers_image_ostree_stub
	I0919 20:06:38.208013   48464 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0919 20:06:38.208017   48464 command_runner.go:130] >   btrfs_noversion
	I0919 20:06:38.208021   48464 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0919 20:06:38.208025   48464 command_runner.go:130] >   libdm_no_deferred_remove
	I0919 20:06:38.208035   48464 command_runner.go:130] >   seccomp
	I0919 20:06:38.208039   48464 command_runner.go:130] > LDFlags:          unknown
	I0919 20:06:38.208043   48464 command_runner.go:130] > SeccompEnabled:   true
	I0919 20:06:38.208047   48464 command_runner.go:130] > AppArmorEnabled:  false
	I0919 20:06:38.211072   48464 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 20:06:38.212515   48464 main.go:141] libmachine: (multinode-282812) Calling .GetIP
	I0919 20:06:38.215101   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:38.215389   48464 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:06:38.215415   48464 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:06:38.215611   48464 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 20:06:38.220063   48464 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0919 20:06:38.220151   48464 kubeadm.go:883] updating cluster {Name:multinode-282812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-282812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 20:06:38.220385   48464 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 20:06:38.220472   48464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:06:38.260376   48464 command_runner.go:130] > {
	I0919 20:06:38.260404   48464 command_runner.go:130] >   "images": [
	I0919 20:06:38.260408   48464 command_runner.go:130] >     {
	I0919 20:06:38.260415   48464 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0919 20:06:38.260420   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.260426   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0919 20:06:38.260436   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260441   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.260452   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0919 20:06:38.260473   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0919 20:06:38.260480   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260487   48464 command_runner.go:130] >       "size": "87190579",
	I0919 20:06:38.260493   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.260497   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.260502   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.260509   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.260512   48464 command_runner.go:130] >     },
	I0919 20:06:38.260515   48464 command_runner.go:130] >     {
	I0919 20:06:38.260523   48464 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0919 20:06:38.260527   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.260534   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0919 20:06:38.260545   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260557   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.260568   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0919 20:06:38.260582   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0919 20:06:38.260589   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260599   48464 command_runner.go:130] >       "size": "1363676",
	I0919 20:06:38.260603   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.260612   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.260616   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.260621   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.260629   48464 command_runner.go:130] >     },
	I0919 20:06:38.260636   48464 command_runner.go:130] >     {
	I0919 20:06:38.260649   48464 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0919 20:06:38.260659   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.260669   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0919 20:06:38.260678   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260685   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.260697   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0919 20:06:38.260706   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0919 20:06:38.260714   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260721   48464 command_runner.go:130] >       "size": "31470524",
	I0919 20:06:38.260731   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.260738   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.260747   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.260753   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.260761   48464 command_runner.go:130] >     },
	I0919 20:06:38.260767   48464 command_runner.go:130] >     {
	I0919 20:06:38.260779   48464 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0919 20:06:38.260784   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.260789   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0919 20:06:38.260798   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260805   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.260819   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0919 20:06:38.260847   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0919 20:06:38.260865   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260871   48464 command_runner.go:130] >       "size": "63273227",
	I0919 20:06:38.260875   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.260883   48464 command_runner.go:130] >       "username": "nonroot",
	I0919 20:06:38.260890   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.260900   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.260909   48464 command_runner.go:130] >     },
	I0919 20:06:38.260914   48464 command_runner.go:130] >     {
	I0919 20:06:38.260927   48464 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0919 20:06:38.260936   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.260947   48464 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0919 20:06:38.260954   48464 command_runner.go:130] >       ],
	I0919 20:06:38.260958   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.260970   48464 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0919 20:06:38.260984   48464 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0919 20:06:38.260993   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261001   48464 command_runner.go:130] >       "size": "149009664",
	I0919 20:06:38.261009   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.261018   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.261026   48464 command_runner.go:130] >       },
	I0919 20:06:38.261032   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261040   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261043   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.261048   48464 command_runner.go:130] >     },
	I0919 20:06:38.261056   48464 command_runner.go:130] >     {
	I0919 20:06:38.261075   48464 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0919 20:06:38.261085   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.261103   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0919 20:06:38.261112   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261122   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.261130   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0919 20:06:38.261144   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0919 20:06:38.261157   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261165   48464 command_runner.go:130] >       "size": "95237600",
	I0919 20:06:38.261174   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.261180   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.261188   48464 command_runner.go:130] >       },
	I0919 20:06:38.261195   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261204   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261210   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.261216   48464 command_runner.go:130] >     },
	I0919 20:06:38.261221   48464 command_runner.go:130] >     {
	I0919 20:06:38.261233   48464 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0919 20:06:38.261243   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.261253   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0919 20:06:38.261261   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261268   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.261282   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0919 20:06:38.261296   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0919 20:06:38.261302   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261307   48464 command_runner.go:130] >       "size": "89437508",
	I0919 20:06:38.261316   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.261323   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.261331   48464 command_runner.go:130] >       },
	I0919 20:06:38.261337   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261347   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261354   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.261363   48464 command_runner.go:130] >     },
	I0919 20:06:38.261368   48464 command_runner.go:130] >     {
	I0919 20:06:38.261380   48464 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0919 20:06:38.261388   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.261395   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0919 20:06:38.261404   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261412   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.261443   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0919 20:06:38.261466   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0919 20:06:38.261472   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261479   48464 command_runner.go:130] >       "size": "92733849",
	I0919 20:06:38.261489   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.261498   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261504   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261514   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.261520   48464 command_runner.go:130] >     },
	I0919 20:06:38.261525   48464 command_runner.go:130] >     {
	I0919 20:06:38.261535   48464 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0919 20:06:38.261541   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.261549   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0919 20:06:38.261552   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261556   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.261566   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0919 20:06:38.261578   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0919 20:06:38.261583   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261590   48464 command_runner.go:130] >       "size": "68420934",
	I0919 20:06:38.261596   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.261602   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.261607   48464 command_runner.go:130] >       },
	I0919 20:06:38.261614   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261620   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261625   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.261630   48464 command_runner.go:130] >     },
	I0919 20:06:38.261636   48464 command_runner.go:130] >     {
	I0919 20:06:38.261644   48464 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0919 20:06:38.261653   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.261660   48464 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0919 20:06:38.261669   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261676   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.261686   48464 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0919 20:06:38.261700   48464 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0919 20:06:38.261714   48464 command_runner.go:130] >       ],
	I0919 20:06:38.261723   48464 command_runner.go:130] >       "size": "742080",
	I0919 20:06:38.261727   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.261732   48464 command_runner.go:130] >         "value": "65535"
	I0919 20:06:38.261736   48464 command_runner.go:130] >       },
	I0919 20:06:38.261746   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.261753   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.261763   48464 command_runner.go:130] >       "pinned": true
	I0919 20:06:38.261770   48464 command_runner.go:130] >     }
	I0919 20:06:38.261776   48464 command_runner.go:130] >   ]
	I0919 20:06:38.261784   48464 command_runner.go:130] > }
	I0919 20:06:38.262034   48464 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 20:06:38.262052   48464 crio.go:433] Images already preloaded, skipping extraction
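Note: the "all images are preloaded" conclusion above is drawn from the `sudo crictl images --output json` payload dumped just before it. The following is a self-contained sketch (assumed struct names, not minikube's crio package) of decoding that JSON shape and checking whether a required tag is already present on the node:

// Sketch: decode the `crictl images --output json` schema shown above and
// check whether a given repo tag already exists on the node.
package main

import (
	"encoding/json"
	"fmt"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func hasTag(list imageList, tag string) bool {
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true
			}
		}
	}
	return false
}

func main() {
	// Abbreviated sample in the same shape as the log output above.
	raw := `{"images":[{"id":"6bab7719df10","repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"repoDigests":[],"size":"95237600","pinned":false}]}`
	var list imageList
	if err := json.Unmarshal([]byte(raw), &list); err != nil {
		panic(err)
	}
	fmt.Println(hasTag(list, "registry.k8s.io/kube-apiserver:v1.31.1")) // true
}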
	I0919 20:06:38.262128   48464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:06:38.295069   48464 command_runner.go:130] > {
	I0919 20:06:38.295098   48464 command_runner.go:130] >   "images": [
	I0919 20:06:38.295110   48464 command_runner.go:130] >     {
	I0919 20:06:38.295118   48464 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0919 20:06:38.295123   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295128   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0919 20:06:38.295132   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295136   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295144   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0919 20:06:38.295150   48464 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0919 20:06:38.295154   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295161   48464 command_runner.go:130] >       "size": "87190579",
	I0919 20:06:38.295168   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.295174   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.295185   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295199   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295208   48464 command_runner.go:130] >     },
	I0919 20:06:38.295212   48464 command_runner.go:130] >     {
	I0919 20:06:38.295218   48464 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0919 20:06:38.295222   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295228   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0919 20:06:38.295231   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295236   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295244   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0919 20:06:38.295258   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0919 20:06:38.295268   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295273   48464 command_runner.go:130] >       "size": "1363676",
	I0919 20:06:38.295281   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.295293   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.295302   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295308   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295313   48464 command_runner.go:130] >     },
	I0919 20:06:38.295316   48464 command_runner.go:130] >     {
	I0919 20:06:38.295322   48464 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0919 20:06:38.295326   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295334   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0919 20:06:38.295343   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295349   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295364   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0919 20:06:38.295376   48464 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0919 20:06:38.295385   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295394   48464 command_runner.go:130] >       "size": "31470524",
	I0919 20:06:38.295401   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.295407   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.295411   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295418   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295426   48464 command_runner.go:130] >     },
	I0919 20:06:38.295435   48464 command_runner.go:130] >     {
	I0919 20:06:38.295451   48464 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0919 20:06:38.295471   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295479   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0919 20:06:38.295487   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295494   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295504   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0919 20:06:38.295526   48464 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0919 20:06:38.295535   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295544   48464 command_runner.go:130] >       "size": "63273227",
	I0919 20:06:38.295553   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.295564   48464 command_runner.go:130] >       "username": "nonroot",
	I0919 20:06:38.295573   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295579   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295583   48464 command_runner.go:130] >     },
	I0919 20:06:38.295591   48464 command_runner.go:130] >     {
	I0919 20:06:38.295602   48464 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0919 20:06:38.295611   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295619   48464 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0919 20:06:38.295628   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295634   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295648   48464 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0919 20:06:38.295665   48464 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0919 20:06:38.295672   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295679   48464 command_runner.go:130] >       "size": "149009664",
	I0919 20:06:38.295689   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.295696   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.295704   48464 command_runner.go:130] >       },
	I0919 20:06:38.295710   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.295719   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295725   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295733   48464 command_runner.go:130] >     },
	I0919 20:06:38.295738   48464 command_runner.go:130] >     {
	I0919 20:06:38.295748   48464 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0919 20:06:38.295763   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295775   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0919 20:06:38.295783   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295791   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295805   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0919 20:06:38.295819   48464 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0919 20:06:38.295828   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295835   48464 command_runner.go:130] >       "size": "95237600",
	I0919 20:06:38.295844   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.295853   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.295861   48464 command_runner.go:130] >       },
	I0919 20:06:38.295871   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.295879   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.295889   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.295896   48464 command_runner.go:130] >     },
	I0919 20:06:38.295903   48464 command_runner.go:130] >     {
	I0919 20:06:38.295912   48464 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0919 20:06:38.295922   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.295934   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0919 20:06:38.295942   48464 command_runner.go:130] >       ],
	I0919 20:06:38.295951   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.295969   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0919 20:06:38.295983   48464 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0919 20:06:38.295995   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296005   48464 command_runner.go:130] >       "size": "89437508",
	I0919 20:06:38.296011   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.296021   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.296027   48464 command_runner.go:130] >       },
	I0919 20:06:38.296036   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.296042   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.296051   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.296056   48464 command_runner.go:130] >     },
	I0919 20:06:38.296062   48464 command_runner.go:130] >     {
	I0919 20:06:38.296082   48464 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0919 20:06:38.296092   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.296100   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0919 20:06:38.296108   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296115   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.296142   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0919 20:06:38.296156   48464 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0919 20:06:38.296165   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296175   48464 command_runner.go:130] >       "size": "92733849",
	I0919 20:06:38.296183   48464 command_runner.go:130] >       "uid": null,
	I0919 20:06:38.296192   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.296199   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.296208   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.296216   48464 command_runner.go:130] >     },
	I0919 20:06:38.296221   48464 command_runner.go:130] >     {
	I0919 20:06:38.296227   48464 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0919 20:06:38.296235   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.296246   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0919 20:06:38.296255   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296261   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.296276   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0919 20:06:38.296289   48464 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0919 20:06:38.296297   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296303   48464 command_runner.go:130] >       "size": "68420934",
	I0919 20:06:38.296309   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.296315   48464 command_runner.go:130] >         "value": "0"
	I0919 20:06:38.296323   48464 command_runner.go:130] >       },
	I0919 20:06:38.296332   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.296342   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.296348   48464 command_runner.go:130] >       "pinned": false
	I0919 20:06:38.296355   48464 command_runner.go:130] >     },
	I0919 20:06:38.296361   48464 command_runner.go:130] >     {
	I0919 20:06:38.296373   48464 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0919 20:06:38.296389   48464 command_runner.go:130] >       "repoTags": [
	I0919 20:06:38.296396   48464 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0919 20:06:38.296400   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296409   48464 command_runner.go:130] >       "repoDigests": [
	I0919 20:06:38.296423   48464 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0919 20:06:38.296441   48464 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0919 20:06:38.296449   48464 command_runner.go:130] >       ],
	I0919 20:06:38.296460   48464 command_runner.go:130] >       "size": "742080",
	I0919 20:06:38.296469   48464 command_runner.go:130] >       "uid": {
	I0919 20:06:38.296477   48464 command_runner.go:130] >         "value": "65535"
	I0919 20:06:38.296482   48464 command_runner.go:130] >       },
	I0919 20:06:38.296489   48464 command_runner.go:130] >       "username": "",
	I0919 20:06:38.296497   48464 command_runner.go:130] >       "spec": null,
	I0919 20:06:38.296507   48464 command_runner.go:130] >       "pinned": true
	I0919 20:06:38.296515   48464 command_runner.go:130] >     }
	I0919 20:06:38.296521   48464 command_runner.go:130] >   ]
	I0919 20:06:38.296529   48464 command_runner.go:130] > }
	I0919 20:06:38.296683   48464 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 20:06:38.296697   48464 cache_images.go:84] Images are preloaded, skipping loading
	I0919 20:06:38.296706   48464 kubeadm.go:934] updating node { 192.168.39.87 8443 v1.31.1 crio true true} ...
	I0919 20:06:38.296823   48464 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-282812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-282812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
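Note: the kubelet unit override logged above is a small systemd drop-in filled with per-node values (kubelet binary path, hostname override, node IP). A hedged illustration of rendering such a drop-in with text/template (assumed field names, not minikube's actual template code):

// Illustrative only: render a kubelet systemd drop-in like the one logged
// above from a few per-node parameters using text/template.
package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.31.1/kubelet",
		"NodeName":    "multinode-282812",
		"NodeIP":      "192.168.39.87",
	})
}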
	I0919 20:06:38.296900   48464 ssh_runner.go:195] Run: crio config
	I0919 20:06:38.338915   48464 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0919 20:06:38.338948   48464 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0919 20:06:38.338966   48464 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0919 20:06:38.338979   48464 command_runner.go:130] > #
	I0919 20:06:38.338991   48464 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0919 20:06:38.338997   48464 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0919 20:06:38.339006   48464 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0919 20:06:38.339015   48464 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0919 20:06:38.339020   48464 command_runner.go:130] > # reload'.
	I0919 20:06:38.339030   48464 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0919 20:06:38.339043   48464 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0919 20:06:38.339053   48464 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0919 20:06:38.339065   48464 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0919 20:06:38.339072   48464 command_runner.go:130] > [crio]
	I0919 20:06:38.339081   48464 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0919 20:06:38.339088   48464 command_runner.go:130] > # containers images, in this directory.
	I0919 20:06:38.339094   48464 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0919 20:06:38.339106   48464 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0919 20:06:38.339182   48464 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0919 20:06:38.339200   48464 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0919 20:06:38.339365   48464 command_runner.go:130] > # imagestore = ""
	I0919 20:06:38.339376   48464 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0919 20:06:38.339382   48464 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0919 20:06:38.339476   48464 command_runner.go:130] > storage_driver = "overlay"
	I0919 20:06:38.339490   48464 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0919 20:06:38.339499   48464 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0919 20:06:38.339506   48464 command_runner.go:130] > storage_option = [
	I0919 20:06:38.339643   48464 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0919 20:06:38.339667   48464 command_runner.go:130] > ]
	I0919 20:06:38.339678   48464 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0919 20:06:38.339691   48464 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0919 20:06:38.340017   48464 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0919 20:06:38.340033   48464 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0919 20:06:38.340043   48464 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0919 20:06:38.340051   48464 command_runner.go:130] > # always happen on a node reboot
	I0919 20:06:38.340298   48464 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0919 20:06:38.340326   48464 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0919 20:06:38.340336   48464 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0919 20:06:38.340347   48464 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0919 20:06:38.340491   48464 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0919 20:06:38.340506   48464 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0919 20:06:38.340519   48464 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0919 20:06:38.340875   48464 command_runner.go:130] > # internal_wipe = true
	I0919 20:06:38.340889   48464 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0919 20:06:38.340898   48464 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0919 20:06:38.341212   48464 command_runner.go:130] > # internal_repair = false
	I0919 20:06:38.341228   48464 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0919 20:06:38.341238   48464 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0919 20:06:38.341251   48464 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0919 20:06:38.341465   48464 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0919 20:06:38.341475   48464 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0919 20:06:38.341479   48464 command_runner.go:130] > [crio.api]
	I0919 20:06:38.341492   48464 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0919 20:06:38.341771   48464 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0919 20:06:38.341786   48464 command_runner.go:130] > # IP address on which the stream server will listen.
	I0919 20:06:38.342118   48464 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0919 20:06:38.342126   48464 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0919 20:06:38.342132   48464 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0919 20:06:38.342386   48464 command_runner.go:130] > # stream_port = "0"
	I0919 20:06:38.342394   48464 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0919 20:06:38.342636   48464 command_runner.go:130] > # stream_enable_tls = false
	I0919 20:06:38.342645   48464 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0919 20:06:38.342915   48464 command_runner.go:130] > # stream_idle_timeout = ""
	I0919 20:06:38.342924   48464 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0919 20:06:38.342930   48464 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0919 20:06:38.342933   48464 command_runner.go:130] > # minutes.
	I0919 20:06:38.343166   48464 command_runner.go:130] > # stream_tls_cert = ""
	I0919 20:06:38.343175   48464 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0919 20:06:38.343181   48464 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0919 20:06:38.343393   48464 command_runner.go:130] > # stream_tls_key = ""
	I0919 20:06:38.343402   48464 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0919 20:06:38.343408   48464 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0919 20:06:38.343425   48464 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0919 20:06:38.343629   48464 command_runner.go:130] > # stream_tls_ca = ""
	I0919 20:06:38.343640   48464 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0919 20:06:38.343847   48464 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0919 20:06:38.343857   48464 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0919 20:06:38.343982   48464 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0919 20:06:38.343991   48464 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0919 20:06:38.343996   48464 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0919 20:06:38.344000   48464 command_runner.go:130] > [crio.runtime]
	I0919 20:06:38.344005   48464 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0919 20:06:38.344013   48464 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0919 20:06:38.344017   48464 command_runner.go:130] > # "nofile=1024:2048"
	I0919 20:06:38.344024   48464 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0919 20:06:38.344187   48464 command_runner.go:130] > # default_ulimits = [
	I0919 20:06:38.344312   48464 command_runner.go:130] > # ]
	I0919 20:06:38.344320   48464 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0919 20:06:38.344598   48464 command_runner.go:130] > # no_pivot = false
	I0919 20:06:38.344607   48464 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0919 20:06:38.344613   48464 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0919 20:06:38.344913   48464 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0919 20:06:38.344928   48464 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0919 20:06:38.344933   48464 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0919 20:06:38.344939   48464 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 20:06:38.345044   48464 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0919 20:06:38.345057   48464 command_runner.go:130] > # Cgroup setting for conmon
	I0919 20:06:38.345075   48464 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0919 20:06:38.345268   48464 command_runner.go:130] > conmon_cgroup = "pod"
	I0919 20:06:38.345286   48464 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0919 20:06:38.345296   48464 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0919 20:06:38.345309   48464 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 20:06:38.345318   48464 command_runner.go:130] > conmon_env = [
	I0919 20:06:38.345393   48464 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0919 20:06:38.345482   48464 command_runner.go:130] > ]
	I0919 20:06:38.345495   48464 command_runner.go:130] > # Additional environment variables to set for all the
	I0919 20:06:38.345503   48464 command_runner.go:130] > # containers. These are overridden if set in the
	I0919 20:06:38.345515   48464 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0919 20:06:38.345614   48464 command_runner.go:130] > # default_env = [
	I0919 20:06:38.345863   48464 command_runner.go:130] > # ]
	I0919 20:06:38.345873   48464 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0919 20:06:38.345880   48464 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0919 20:06:38.346232   48464 command_runner.go:130] > # selinux = false
	I0919 20:06:38.346241   48464 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0919 20:06:38.346247   48464 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0919 20:06:38.346252   48464 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0919 20:06:38.346474   48464 command_runner.go:130] > # seccomp_profile = ""
	I0919 20:06:38.346483   48464 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0919 20:06:38.346489   48464 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0919 20:06:38.346498   48464 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0919 20:06:38.346505   48464 command_runner.go:130] > # which might increase security.
	I0919 20:06:38.346510   48464 command_runner.go:130] > # This option is currently deprecated,
	I0919 20:06:38.346518   48464 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0919 20:06:38.346611   48464 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0919 20:06:38.346619   48464 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0919 20:06:38.346625   48464 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0919 20:06:38.346631   48464 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0919 20:06:38.346637   48464 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0919 20:06:38.346644   48464 command_runner.go:130] > # This option supports live configuration reload.
	I0919 20:06:38.346986   48464 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0919 20:06:38.346994   48464 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0919 20:06:38.346998   48464 command_runner.go:130] > # the cgroup blockio controller.
	I0919 20:06:38.347261   48464 command_runner.go:130] > # blockio_config_file = ""
	I0919 20:06:38.347281   48464 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0919 20:06:38.347289   48464 command_runner.go:130] > # blockio parameters.
	I0919 20:06:38.347929   48464 command_runner.go:130] > # blockio_reload = false
	I0919 20:06:38.347944   48464 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0919 20:06:38.347948   48464 command_runner.go:130] > # irqbalance daemon.
	I0919 20:06:38.347953   48464 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0919 20:06:38.347959   48464 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask CRI-O should
	I0919 20:06:38.347965   48464 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0919 20:06:38.348048   48464 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0919 20:06:38.348184   48464 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0919 20:06:38.348199   48464 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0919 20:06:38.348208   48464 command_runner.go:130] > # This option supports live configuration reload.
	I0919 20:06:38.348215   48464 command_runner.go:130] > # rdt_config_file = ""
	I0919 20:06:38.348233   48464 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0919 20:06:38.348242   48464 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0919 20:06:38.348292   48464 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0919 20:06:38.348304   48464 command_runner.go:130] > # separate_pull_cgroup = ""
	I0919 20:06:38.348323   48464 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0919 20:06:38.348361   48464 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0919 20:06:38.348382   48464 command_runner.go:130] > # will be added.
	I0919 20:06:38.348390   48464 command_runner.go:130] > # default_capabilities = [
	I0919 20:06:38.348401   48464 command_runner.go:130] > # 	"CHOWN",
	I0919 20:06:38.348409   48464 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0919 20:06:38.348416   48464 command_runner.go:130] > # 	"FSETID",
	I0919 20:06:38.348422   48464 command_runner.go:130] > # 	"FOWNER",
	I0919 20:06:38.348434   48464 command_runner.go:130] > # 	"SETGID",
	I0919 20:06:38.348439   48464 command_runner.go:130] > # 	"SETUID",
	I0919 20:06:38.348445   48464 command_runner.go:130] > # 	"SETPCAP",
	I0919 20:06:38.348451   48464 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0919 20:06:38.348457   48464 command_runner.go:130] > # 	"KILL",
	I0919 20:06:38.348462   48464 command_runner.go:130] > # ]
	I0919 20:06:38.348479   48464 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0919 20:06:38.348489   48464 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0919 20:06:38.348498   48464 command_runner.go:130] > # add_inheritable_capabilities = false
	I0919 20:06:38.348513   48464 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0919 20:06:38.348523   48464 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 20:06:38.348529   48464 command_runner.go:130] > default_sysctls = [
	I0919 20:06:38.348547   48464 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0919 20:06:38.348553   48464 command_runner.go:130] > ]
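Because net.ipv4.ip_unprivileged_port_start=0 is a namespaced sysctl that CRI-O applies to every pod, its effect can be spot-checked from inside a throwaway pod once the cluster is up; a sketch (the pod name is illustrative, the busybox image is the one used elsewhere in this run):

	# Should print 0, i.e. containers may bind ports below 1024 without extra capabilities
	kubectl run sysctl-check --rm -i --restart=Never \
	  --image=docker.io/busybox:stable -- \
	  cat /proc/sys/net/ipv4/ip_unprivileged_port_start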
	I0919 20:06:38.348560   48464 command_runner.go:130] > # List of devices on the host that a
	I0919 20:06:38.348571   48464 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0919 20:06:38.348578   48464 command_runner.go:130] > # allowed_devices = [
	I0919 20:06:38.348597   48464 command_runner.go:130] > # 	"/dev/fuse",
	I0919 20:06:38.348606   48464 command_runner.go:130] > # ]
	I0919 20:06:38.348614   48464 command_runner.go:130] > # List of additional devices, specified as
	I0919 20:06:38.348626   48464 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0919 20:06:38.348641   48464 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0919 20:06:38.348650   48464 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 20:06:38.348657   48464 command_runner.go:130] > # additional_devices = [
	I0919 20:06:38.348662   48464 command_runner.go:130] > # ]
	I0919 20:06:38.348676   48464 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0919 20:06:38.348682   48464 command_runner.go:130] > # cdi_spec_dirs = [
	I0919 20:06:38.348688   48464 command_runner.go:130] > # 	"/etc/cdi",
	I0919 20:06:38.348694   48464 command_runner.go:130] > # 	"/var/run/cdi",
	I0919 20:06:38.348700   48464 command_runner.go:130] > # ]
	I0919 20:06:38.348715   48464 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0919 20:06:38.348725   48464 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0919 20:06:38.348732   48464 command_runner.go:130] > # Defaults to false.
	I0919 20:06:38.348740   48464 command_runner.go:130] > # device_ownership_from_security_context = false
	I0919 20:06:38.348755   48464 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0919 20:06:38.348764   48464 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0919 20:06:38.348770   48464 command_runner.go:130] > # hooks_dir = [
	I0919 20:06:38.348783   48464 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0919 20:06:38.348790   48464 command_runner.go:130] > # ]
	I0919 20:06:38.348799   48464 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0919 20:06:38.348814   48464 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0919 20:06:38.348821   48464 command_runner.go:130] > # its default mounts from the following two files:
	I0919 20:06:38.348827   48464 command_runner.go:130] > #
	I0919 20:06:38.348836   48464 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0919 20:06:38.348932   48464 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0919 20:06:38.348959   48464 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0919 20:06:38.348965   48464 command_runner.go:130] > #
	I0919 20:06:38.348982   48464 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0919 20:06:38.348992   48464 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0919 20:06:38.349002   48464 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0919 20:06:38.349017   48464 command_runner.go:130] > #      only add mounts it finds in this file.
	I0919 20:06:38.349022   48464 command_runner.go:130] > #
	I0919 20:06:38.349031   48464 command_runner.go:130] > # default_mounts_file = ""
	I0919 20:06:38.349040   48464 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0919 20:06:38.349121   48464 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0919 20:06:38.349138   48464 command_runner.go:130] > pids_limit = 1024
	I0919 20:06:38.349148   48464 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0919 20:06:38.349162   48464 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0919 20:06:38.349176   48464 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0919 20:06:38.349197   48464 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0919 20:06:38.349210   48464 command_runner.go:130] > # log_size_max = -1
	I0919 20:06:38.349221   48464 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0919 20:06:38.349230   48464 command_runner.go:130] > # log_to_journald = false
	I0919 20:06:38.349245   48464 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0919 20:06:38.349257   48464 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0919 20:06:38.349265   48464 command_runner.go:130] > # Path to directory for container attach sockets.
	I0919 20:06:38.349278   48464 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0919 20:06:38.349286   48464 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0919 20:06:38.349292   48464 command_runner.go:130] > # bind_mount_prefix = ""
	I0919 20:06:38.349301   48464 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0919 20:06:38.349312   48464 command_runner.go:130] > # read_only = false
	I0919 20:06:38.349322   48464 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0919 20:06:38.349331   48464 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0919 20:06:38.349353   48464 command_runner.go:130] > # live configuration reload.
	I0919 20:06:38.349361   48464 command_runner.go:130] > # log_level = "info"
	I0919 20:06:38.349370   48464 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0919 20:06:38.349378   48464 command_runner.go:130] > # This option supports live configuration reload.
	I0919 20:06:38.349390   48464 command_runner.go:130] > # log_filter = ""
	I0919 20:06:38.349404   48464 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0919 20:06:38.349414   48464 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0919 20:06:38.349419   48464 command_runner.go:130] > # separated by comma.
	I0919 20:06:38.349436   48464 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0919 20:06:38.349442   48464 command_runner.go:130] > # uid_mappings = ""
	I0919 20:06:38.349451   48464 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0919 20:06:38.349465   48464 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0919 20:06:38.349471   48464 command_runner.go:130] > # separated by comma.
	I0919 20:06:38.349489   48464 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0919 20:06:38.349494   48464 command_runner.go:130] > # gid_mappings = ""
	I0919 20:06:38.349503   48464 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0919 20:06:38.349513   48464 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 20:06:38.349529   48464 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 20:06:38.349541   48464 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0919 20:06:38.349553   48464 command_runner.go:130] > # minimum_mappable_uid = -1
	I0919 20:06:38.349562   48464 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0919 20:06:38.349572   48464 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 20:06:38.349585   48464 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 20:06:38.349596   48464 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0919 20:06:38.349602   48464 command_runner.go:130] > # minimum_mappable_gid = -1
	I0919 20:06:38.349658   48464 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0919 20:06:38.349671   48464 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0919 20:06:38.349681   48464 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0919 20:06:38.349688   48464 command_runner.go:130] > # ctr_stop_timeout = 30
	I0919 20:06:38.349701   48464 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0919 20:06:38.349716   48464 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0919 20:06:38.349725   48464 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0919 20:06:38.349738   48464 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0919 20:06:38.349744   48464 command_runner.go:130] > drop_infra_ctr = false
	I0919 20:06:38.349750   48464 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0919 20:06:38.349758   48464 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0919 20:06:38.349769   48464 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0919 20:06:38.349777   48464 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0919 20:06:38.349787   48464 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0919 20:06:38.349793   48464 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0919 20:06:38.349798   48464 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0919 20:06:38.349805   48464 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0919 20:06:38.349809   48464 command_runner.go:130] > # shared_cpuset = ""
	I0919 20:06:38.349814   48464 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0919 20:06:38.349819   48464 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0919 20:06:38.349825   48464 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0919 20:06:38.349833   48464 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0919 20:06:38.349841   48464 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0919 20:06:38.349849   48464 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0919 20:06:38.349856   48464 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0919 20:06:38.349860   48464 command_runner.go:130] > # enable_criu_support = false
	I0919 20:06:38.349867   48464 command_runner.go:130] > # Enable/disable the generation of the container,
	I0919 20:06:38.349875   48464 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0919 20:06:38.349879   48464 command_runner.go:130] > # enable_pod_events = false
	I0919 20:06:38.349888   48464 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0919 20:06:38.349899   48464 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0919 20:06:38.349903   48464 command_runner.go:130] > # default_runtime = "runc"
	I0919 20:06:38.349910   48464 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0919 20:06:38.349917   48464 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I0919 20:06:38.349931   48464 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0919 20:06:38.349939   48464 command_runner.go:130] > # creation as a file is not desired either.
	I0919 20:06:38.349951   48464 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0919 20:06:38.349955   48464 command_runner.go:130] > # the hostname is being managed dynamically.
	I0919 20:06:38.349960   48464 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0919 20:06:38.349963   48464 command_runner.go:130] > # ]
	I0919 20:06:38.349971   48464 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0919 20:06:38.349977   48464 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0919 20:06:38.349983   48464 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0919 20:06:38.349990   48464 command_runner.go:130] > # Each entry in the table should follow the format:
	I0919 20:06:38.349998   48464 command_runner.go:130] > #
	I0919 20:06:38.350002   48464 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0919 20:06:38.350007   48464 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0919 20:06:38.350052   48464 command_runner.go:130] > # runtime_type = "oci"
	I0919 20:06:38.350059   48464 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0919 20:06:38.350064   48464 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0919 20:06:38.350068   48464 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0919 20:06:38.350073   48464 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0919 20:06:38.350079   48464 command_runner.go:130] > # monitor_env = []
	I0919 20:06:38.350083   48464 command_runner.go:130] > # privileged_without_host_devices = false
	I0919 20:06:38.350088   48464 command_runner.go:130] > # allowed_annotations = []
	I0919 20:06:38.350094   48464 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0919 20:06:38.350097   48464 command_runner.go:130] > # Where:
	I0919 20:06:38.350105   48464 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0919 20:06:38.350110   48464 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0919 20:06:38.350119   48464 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0919 20:06:38.350125   48464 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0919 20:06:38.350128   48464 command_runner.go:130] > #   in $PATH.
	I0919 20:06:38.350134   48464 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0919 20:06:38.350141   48464 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0919 20:06:38.350150   48464 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0919 20:06:38.350153   48464 command_runner.go:130] > #   state.
	I0919 20:06:38.350162   48464 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0919 20:06:38.350168   48464 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0919 20:06:38.350174   48464 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0919 20:06:38.350187   48464 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0919 20:06:38.350193   48464 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0919 20:06:38.350202   48464 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0919 20:06:38.350208   48464 command_runner.go:130] > #   The currently recognized values are:
	I0919 20:06:38.350214   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0919 20:06:38.350223   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0919 20:06:38.350229   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0919 20:06:38.350235   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0919 20:06:38.350249   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0919 20:06:38.350255   48464 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0919 20:06:38.350264   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0919 20:06:38.350270   48464 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0919 20:06:38.350278   48464 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0919 20:06:38.350284   48464 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0919 20:06:38.350288   48464 command_runner.go:130] > #   deprecated option "conmon".
	I0919 20:06:38.350297   48464 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0919 20:06:38.350302   48464 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0919 20:06:38.350308   48464 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0919 20:06:38.350315   48464 command_runner.go:130] > #   should be moved to the container's cgroup
	I0919 20:06:38.350321   48464 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0919 20:06:38.350326   48464 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0919 20:06:38.350335   48464 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0919 20:06:38.350347   48464 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0919 20:06:38.350350   48464 command_runner.go:130] > #
	I0919 20:06:38.350359   48464 command_runner.go:130] > # Using the seccomp notifier feature:
	I0919 20:06:38.350363   48464 command_runner.go:130] > #
	I0919 20:06:38.350371   48464 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0919 20:06:38.350377   48464 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0919 20:06:38.350382   48464 command_runner.go:130] > #
	I0919 20:06:38.350391   48464 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0919 20:06:38.350397   48464 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0919 20:06:38.350399   48464 command_runner.go:130] > #
	I0919 20:06:38.350408   48464 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0919 20:06:38.350411   48464 command_runner.go:130] > # feature.
	I0919 20:06:38.350414   48464 command_runner.go:130] > #
	I0919 20:06:38.350422   48464 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0919 20:06:38.350430   48464 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0919 20:06:38.350437   48464 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0919 20:06:38.350443   48464 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0919 20:06:38.350451   48464 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0919 20:06:38.350454   48464 command_runner.go:130] > #
	I0919 20:06:38.350464   48464 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0919 20:06:38.350473   48464 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0919 20:06:38.350475   48464 command_runner.go:130] > #
	I0919 20:06:38.350481   48464 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0919 20:06:38.350486   48464 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0919 20:06:38.350489   48464 command_runner.go:130] > #
	I0919 20:06:38.350497   48464 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0919 20:06:38.350502   48464 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0919 20:06:38.350506   48464 command_runner.go:130] > # limitation.
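A minimal sketch of opting a pod into the notifier, assuming a runtime handler whose allowed_annotations already includes "io.kubernetes.cri-o.seccompNotifierAction" (the pod name and command are illustrative):

	# Hypothetical pod using the seccomp notifier; note restartPolicy must be Never
	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notify-demo
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never
	  containers:
	  - name: app
	    image: docker.io/busybox:stable
	    command: ["sleep", "3600"]
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault
	EOF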
	I0919 20:06:38.350512   48464 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0919 20:06:38.350516   48464 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0919 20:06:38.350520   48464 command_runner.go:130] > runtime_type = "oci"
	I0919 20:06:38.350527   48464 command_runner.go:130] > runtime_root = "/run/runc"
	I0919 20:06:38.350531   48464 command_runner.go:130] > runtime_config_path = ""
	I0919 20:06:38.350538   48464 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0919 20:06:38.350541   48464 command_runner.go:130] > monitor_cgroup = "pod"
	I0919 20:06:38.350545   48464 command_runner.go:130] > monitor_exec_cgroup = ""
	I0919 20:06:38.350549   48464 command_runner.go:130] > monitor_env = [
	I0919 20:06:38.350554   48464 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0919 20:06:38.350559   48464 command_runner.go:130] > ]
	I0919 20:06:38.350564   48464 command_runner.go:130] > privileged_without_host_devices = false
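Following the table format documented above, an additional handler can be registered next to runc via a drop-in; this is only a sketch, and the crun binary path, runtime_root, and file name are assumptions:

	# Hypothetical second OCI runtime handler
	sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	EOF
	sudo systemctl restart crio

Pods would then select the handler through a Kubernetes RuntimeClass whose handler field matches the table name (crun).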
	I0919 20:06:38.350570   48464 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0919 20:06:38.350575   48464 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0919 20:06:38.350583   48464 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0919 20:06:38.350590   48464 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0919 20:06:38.350600   48464 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0919 20:06:38.350606   48464 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0919 20:06:38.350621   48464 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0919 20:06:38.350628   48464 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0919 20:06:38.350637   48464 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0919 20:06:38.350643   48464 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0919 20:06:38.350647   48464 command_runner.go:130] > # Example:
	I0919 20:06:38.350655   48464 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0919 20:06:38.350663   48464 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0919 20:06:38.350668   48464 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0919 20:06:38.350676   48464 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0919 20:06:38.350679   48464 command_runner.go:130] > # cpuset = 0
	I0919 20:06:38.350682   48464 command_runner.go:130] > # cpushares = "0-1"
	I0919 20:06:38.350686   48464 command_runner.go:130] > # Where:
	I0919 20:06:38.350690   48464 command_runner.go:130] > # The workload name is workload-type.
	I0919 20:06:38.350703   48464 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0919 20:06:38.350708   48464 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0919 20:06:38.350713   48464 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0919 20:06:38.350724   48464 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0919 20:06:38.350729   48464 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0919 20:06:38.350735   48464 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0919 20:06:38.350745   48464 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0919 20:06:38.350749   48464 command_runner.go:130] > # Default value is set to true
	I0919 20:06:38.350753   48464 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0919 20:06:38.350761   48464 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0919 20:06:38.350765   48464 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0919 20:06:38.350769   48464 command_runner.go:130] > # Default value is set to 'false'
	I0919 20:06:38.350773   48464 command_runner.go:130] > # disable_hostport_mapping = false
	I0919 20:06:38.350782   48464 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0919 20:06:38.350785   48464 command_runner.go:130] > #
	I0919 20:06:38.350790   48464 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0919 20:06:38.350802   48464 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0919 20:06:38.350808   48464 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0919 20:06:38.350816   48464 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0919 20:06:38.350829   48464 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0919 20:06:38.350833   48464 command_runner.go:130] > [crio.image]
	I0919 20:06:38.350842   48464 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0919 20:06:38.350854   48464 command_runner.go:130] > # default_transport = "docker://"
	I0919 20:06:38.350867   48464 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0919 20:06:38.350874   48464 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0919 20:06:38.350885   48464 command_runner.go:130] > # global_auth_file = ""
	I0919 20:06:38.350894   48464 command_runner.go:130] > # The image used to instantiate infra containers.
	I0919 20:06:38.350899   48464 command_runner.go:130] > # This option supports live configuration reload.
	I0919 20:06:38.350904   48464 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0919 20:06:38.350918   48464 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0919 20:06:38.350928   48464 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0919 20:06:38.350935   48464 command_runner.go:130] > # This option supports live configuration reload.
	I0919 20:06:38.350947   48464 command_runner.go:130] > # pause_image_auth_file = ""
	I0919 20:06:38.350956   48464 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0919 20:06:38.350966   48464 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0919 20:06:38.350980   48464 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0919 20:06:38.350989   48464 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0919 20:06:38.350994   48464 command_runner.go:130] > # pause_command = "/pause"
	I0919 20:06:38.351007   48464 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0919 20:06:38.351017   48464 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0919 20:06:38.351025   48464 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0919 20:06:38.351046   48464 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0919 20:06:38.351052   48464 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0919 20:06:38.351058   48464 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0919 20:06:38.351065   48464 command_runner.go:130] > # pinned_images = [
	I0919 20:06:38.351067   48464 command_runner.go:130] > # ]
	I0919 20:06:38.351073   48464 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0919 20:06:38.351092   48464 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0919 20:06:38.351101   48464 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0919 20:06:38.351107   48464 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0919 20:06:38.351115   48464 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0919 20:06:38.351118   48464 command_runner.go:130] > # signature_policy = ""
	I0919 20:06:38.351123   48464 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0919 20:06:38.351130   48464 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0919 20:06:38.351138   48464 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0919 20:06:38.351144   48464 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0919 20:06:38.351152   48464 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0919 20:06:38.351157   48464 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0919 20:06:38.351164   48464 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0919 20:06:38.351183   48464 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0919 20:06:38.351187   48464 command_runner.go:130] > # changing them here.
	I0919 20:06:38.351191   48464 command_runner.go:130] > # insecure_registries = [
	I0919 20:06:38.351194   48464 command_runner.go:130] > # ]
	I0919 20:06:38.351203   48464 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0919 20:06:38.351208   48464 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0919 20:06:38.351214   48464 command_runner.go:130] > # image_volumes = "mkdir"
	I0919 20:06:38.351223   48464 command_runner.go:130] > # Temporary directory to use for storing big files
	I0919 20:06:38.351227   48464 command_runner.go:130] > # big_files_temporary_dir = ""
	I0919 20:06:38.351236   48464 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0919 20:06:38.351239   48464 command_runner.go:130] > # CNI plugins.
	I0919 20:06:38.351243   48464 command_runner.go:130] > [crio.network]
	I0919 20:06:38.351251   48464 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0919 20:06:38.351256   48464 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0919 20:06:38.351260   48464 command_runner.go:130] > # cni_default_network = ""
	I0919 20:06:38.351265   48464 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0919 20:06:38.351272   48464 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0919 20:06:38.351277   48464 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0919 20:06:38.351280   48464 command_runner.go:130] > # plugin_dirs = [
	I0919 20:06:38.351284   48464 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0919 20:06:38.351287   48464 command_runner.go:130] > # ]
	I0919 20:06:38.351295   48464 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0919 20:06:38.351298   48464 command_runner.go:130] > [crio.metrics]
	I0919 20:06:38.351303   48464 command_runner.go:130] > # Globally enable or disable metrics support.
	I0919 20:06:38.351306   48464 command_runner.go:130] > enable_metrics = true
	I0919 20:06:38.351313   48464 command_runner.go:130] > # Specify enabled metrics collectors.
	I0919 20:06:38.351317   48464 command_runner.go:130] > # Per default all metrics are enabled.
	I0919 20:06:38.351323   48464 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0919 20:06:38.351332   48464 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0919 20:06:38.351343   48464 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0919 20:06:38.351348   48464 command_runner.go:130] > # metrics_collectors = [
	I0919 20:06:38.351351   48464 command_runner.go:130] > # 	"operations",
	I0919 20:06:38.351356   48464 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0919 20:06:38.351367   48464 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0919 20:06:38.351371   48464 command_runner.go:130] > # 	"operations_errors",
	I0919 20:06:38.351379   48464 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0919 20:06:38.351383   48464 command_runner.go:130] > # 	"image_pulls_by_name",
	I0919 20:06:38.351387   48464 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0919 20:06:38.351393   48464 command_runner.go:130] > # 	"image_pulls_failures",
	I0919 20:06:38.351397   48464 command_runner.go:130] > # 	"image_pulls_successes",
	I0919 20:06:38.351401   48464 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0919 20:06:38.351405   48464 command_runner.go:130] > # 	"image_layer_reuse",
	I0919 20:06:38.351410   48464 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0919 20:06:38.351419   48464 command_runner.go:130] > # 	"containers_oom_total",
	I0919 20:06:38.351422   48464 command_runner.go:130] > # 	"containers_oom",
	I0919 20:06:38.351426   48464 command_runner.go:130] > # 	"processes_defunct",
	I0919 20:06:38.351430   48464 command_runner.go:130] > # 	"operations_total",
	I0919 20:06:38.351434   48464 command_runner.go:130] > # 	"operations_latency_seconds",
	I0919 20:06:38.351441   48464 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0919 20:06:38.351445   48464 command_runner.go:130] > # 	"operations_errors_total",
	I0919 20:06:38.351450   48464 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0919 20:06:38.351454   48464 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0919 20:06:38.351458   48464 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0919 20:06:38.351465   48464 command_runner.go:130] > # 	"image_pulls_success_total",
	I0919 20:06:38.351469   48464 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0919 20:06:38.351473   48464 command_runner.go:130] > # 	"containers_oom_count_total",
	I0919 20:06:38.351477   48464 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0919 20:06:38.351484   48464 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0919 20:06:38.351487   48464 command_runner.go:130] > # ]
	I0919 20:06:38.351492   48464 command_runner.go:130] > # The port on which the metrics server will listen.
	I0919 20:06:38.351496   48464 command_runner.go:130] > # metrics_port = 9090
	I0919 20:06:38.351500   48464 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0919 20:06:38.351507   48464 command_runner.go:130] > # metrics_socket = ""
	I0919 20:06:38.351512   48464 command_runner.go:130] > # The certificate for the secure metrics server.
	I0919 20:06:38.351518   48464 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0919 20:06:38.351530   48464 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0919 20:06:38.351542   48464 command_runner.go:130] > # certificate on any modification event.
	I0919 20:06:38.351546   48464 command_runner.go:130] > # metrics_cert = ""
	I0919 20:06:38.351551   48464 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0919 20:06:38.351559   48464 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0919 20:06:38.351562   48464 command_runner.go:130] > # metrics_key = ""
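With enable_metrics = true and metrics_port left at its default of 9090, the collectors above are exposed as a plain Prometheus endpoint on the node; a quick scrape, with the grep pattern chosen from the collector names listed here:

	# Inspect CRI-O's metrics endpoint (default port 9090)
	curl -s http://127.0.0.1:9090/metrics | grep -E 'crio_operations|crio_image_pulls' | head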
	I0919 20:06:38.351568   48464 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0919 20:06:38.351571   48464 command_runner.go:130] > [crio.tracing]
	I0919 20:06:38.351579   48464 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0919 20:06:38.351583   48464 command_runner.go:130] > # enable_tracing = false
	I0919 20:06:38.351587   48464 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0919 20:06:38.351591   48464 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0919 20:06:38.351600   48464 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0919 20:06:38.351605   48464 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0919 20:06:38.351608   48464 command_runner.go:130] > # CRI-O NRI configuration.
	I0919 20:06:38.351612   48464 command_runner.go:130] > [crio.nri]
	I0919 20:06:38.351616   48464 command_runner.go:130] > # Globally enable or disable NRI.
	I0919 20:06:38.351622   48464 command_runner.go:130] > # enable_nri = false
	I0919 20:06:38.351626   48464 command_runner.go:130] > # NRI socket to listen on.
	I0919 20:06:38.351630   48464 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0919 20:06:38.351635   48464 command_runner.go:130] > # NRI plugin directory to use.
	I0919 20:06:38.351642   48464 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0919 20:06:38.351648   48464 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0919 20:06:38.351653   48464 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0919 20:06:38.351658   48464 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0919 20:06:38.351665   48464 command_runner.go:130] > # nri_disable_connections = false
	I0919 20:06:38.351670   48464 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0919 20:06:38.351674   48464 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0919 20:06:38.351680   48464 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0919 20:06:38.351686   48464 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0919 20:06:38.351692   48464 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0919 20:06:38.351695   48464 command_runner.go:130] > [crio.stats]
	I0919 20:06:38.351701   48464 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0919 20:06:38.351713   48464 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0919 20:06:38.351721   48464 command_runner.go:130] > # stats_collection_period = 0
	I0919 20:06:38.351748   48464 command_runner.go:130] ! time="2024-09-19 20:06:38.301176812Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0919 20:06:38.351763   48464 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0919 20:06:38.351852   48464 cni.go:84] Creating CNI manager for ""
	I0919 20:06:38.351859   48464 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 20:06:38.351881   48464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 20:06:38.351904   48464 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-282812 NodeName:multinode-282812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 20:06:38.352124   48464 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-282812"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
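Before the rendered config is handed to kubeadm, it can be sanity-checked against the pinned binaries found below; a sketch, assuming the kubeadm config validate subcommand is available in this 1.31 build and using the path the file is copied to later in this run:

	# Sketch: validate the generated kubeadm config with the bundled binary
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new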
	
	I0919 20:06:38.352194   48464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 20:06:38.362779   48464 command_runner.go:130] > kubeadm
	I0919 20:06:38.362796   48464 command_runner.go:130] > kubectl
	I0919 20:06:38.362802   48464 command_runner.go:130] > kubelet
	I0919 20:06:38.362833   48464 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 20:06:38.362883   48464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 20:06:38.373151   48464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0919 20:06:38.390286   48464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 20:06:38.407152   48464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0919 20:06:38.423759   48464 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0919 20:06:38.427552   48464 command_runner.go:130] > 192.168.39.87	control-plane.minikube.internal
	I0919 20:06:38.427622   48464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:06:38.569089   48464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 20:06:38.584494   48464 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812 for IP: 192.168.39.87
	I0919 20:06:38.584521   48464 certs.go:194] generating shared ca certs ...
	I0919 20:06:38.584543   48464 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 20:06:38.584720   48464 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 20:06:38.584778   48464 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 20:06:38.584793   48464 certs.go:256] generating profile certs ...
	I0919 20:06:38.584890   48464 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/client.key
	I0919 20:06:38.584958   48464 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/apiserver.key.ec5d7b66
	I0919 20:06:38.585014   48464 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/proxy-client.key
	I0919 20:06:38.585025   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 20:06:38.585044   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 20:06:38.585058   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 20:06:38.585093   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 20:06:38.585111   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 20:06:38.585129   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 20:06:38.585146   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 20:06:38.585159   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 20:06:38.585209   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 20:06:38.585236   48464 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 20:06:38.585244   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 20:06:38.585266   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 20:06:38.585288   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 20:06:38.585309   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 20:06:38.585346   48464 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 20:06:38.585372   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> /usr/share/ca-certificates/151162.pem
	I0919 20:06:38.585388   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:06:38.585406   48464 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem -> /usr/share/ca-certificates/15116.pem
	I0919 20:06:38.586025   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 20:06:38.610081   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 20:06:38.633213   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 20:06:38.656601   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 20:06:38.680846   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 20:06:38.704331   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 20:06:38.753090   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 20:06:38.807615   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/multinode-282812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 20:06:38.859136   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 20:06:38.892564   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 20:06:38.921445   48464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 20:06:38.959906   48464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 20:06:38.988225   48464 ssh_runner.go:195] Run: openssl version
	I0919 20:06:39.001632   48464 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0919 20:06:39.002151   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 20:06:39.015700   48464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 20:06:39.030663   48464 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 20:06:39.030698   48464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 20:06:39.030751   48464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 20:06:39.038219   48464 command_runner.go:130] > 51391683
	I0919 20:06:39.038341   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 20:06:39.049239   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 20:06:39.060766   48464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 20:06:39.065318   48464 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 20:06:39.065354   48464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 20:06:39.065417   48464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 20:06:39.071115   48464 command_runner.go:130] > 3ec20f2e
	I0919 20:06:39.071173   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 20:06:39.081917   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 20:06:39.093022   48464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:06:39.097423   48464 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:06:39.097547   48464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:06:39.097620   48464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:06:39.103185   48464 command_runner.go:130] > b5213941
	I0919 20:06:39.103254   48464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
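
	The three blocks above repeat the same pattern for each CA bundle: hash the PEM with `openssl x509 -hash -noout`, then symlink the certificate to /etc/ssl/certs/<hash>.0 so the system trust store picks it up. A minimal Go sketch of that pattern follows; it is an illustration only (not minikube's own code). The example path is taken from the log, while the destination directory handling and error reporting are assumptions.

	// Illustrative sketch (not minikube source): install a CA certificate under its
	// OpenSSL subject hash, mirroring the `openssl x509 -hash` + `ln -fs` pair above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert obtains the subject hash of pemPath and links it into certsDir
	// as <hash>.0, the name system TLS clients look for.
	func installCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, matching `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
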
	I0919 20:06:39.113141   48464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 20:06:39.117806   48464 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 20:06:39.117837   48464 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0919 20:06:39.117846   48464 command_runner.go:130] > Device: 253,1	Inode: 3148840     Links: 1
	I0919 20:06:39.117855   48464 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 20:06:39.117869   48464 command_runner.go:130] > Access: 2024-09-19 19:59:55.109042311 +0000
	I0919 20:06:39.117877   48464 command_runner.go:130] > Modify: 2024-09-19 19:59:55.109042311 +0000
	I0919 20:06:39.117887   48464 command_runner.go:130] > Change: 2024-09-19 19:59:55.109042311 +0000
	I0919 20:06:39.117897   48464 command_runner.go:130] >  Birth: 2024-09-19 19:59:55.109042311 +0000
	I0919 20:06:39.118035   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 20:06:39.123688   48464 command_runner.go:130] > Certificate will not expire
	I0919 20:06:39.123868   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 20:06:39.129325   48464 command_runner.go:130] > Certificate will not expire
	I0919 20:06:39.129490   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 20:06:39.135090   48464 command_runner.go:130] > Certificate will not expire
	I0919 20:06:39.135165   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 20:06:39.140436   48464 command_runner.go:130] > Certificate will not expire
	I0919 20:06:39.140599   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 20:06:39.146097   48464 command_runner.go:130] > Certificate will not expire
	I0919 20:06:39.146244   48464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 20:06:39.151734   48464 command_runner.go:130] > Certificate will not expire
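
	Each `openssl x509 -noout -checkend 86400` call above succeeds (printing "Certificate will not expire") only if the named certificate remains valid for at least the next 86400 seconds, i.e. 24 hours. Below is a small, self-contained Go sketch of the equivalent check; it is an assumption-laden illustration that parses the certificates with crypto/x509 instead of shelling out to openssl, with the file paths copied from the log.

	// Illustrative sketch: report whether each PEM certificate expires within 24h,
	// the same condition the -checkend 86400 calls above test.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when the certificate's NotAfter falls inside the next d.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			soon, err := expiresWithin(c, 24*time.Hour)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			if soon {
				fmt.Printf("%s: certificate will expire within 24h\n", c)
			} else {
				fmt.Printf("%s: certificate will not expire\n", c)
			}
		}
	}
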
	I0919 20:06:39.151801   48464 kubeadm.go:392] StartCluster: {Name:multinode-282812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-282812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.238 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:06:39.151924   48464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 20:06:39.151975   48464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 20:06:39.194077   48464 command_runner.go:130] > d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63
	I0919 20:06:39.194108   48464 command_runner.go:130] > b79f6dfa534789a6ecc5defa51edfc1de4dd7718b5ccb224413219ca33cfce07
	I0919 20:06:39.194114   48464 command_runner.go:130] > 8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89
	I0919 20:06:39.194146   48464 command_runner.go:130] > 45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b
	I0919 20:06:39.194152   48464 command_runner.go:130] > e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b
	I0919 20:06:39.194159   48464 command_runner.go:130] > fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f
	I0919 20:06:39.194168   48464 command_runner.go:130] > dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728
	I0919 20:06:39.194180   48464 command_runner.go:130] > 625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af
	I0919 20:06:39.194190   48464 command_runner.go:130] > 65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6
	I0919 20:06:39.194214   48464 cri.go:89] found id: "d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63"
	I0919 20:06:39.194225   48464 cri.go:89] found id: "b79f6dfa534789a6ecc5defa51edfc1de4dd7718b5ccb224413219ca33cfce07"
	I0919 20:06:39.194229   48464 cri.go:89] found id: "8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89"
	I0919 20:06:39.194232   48464 cri.go:89] found id: "45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b"
	I0919 20:06:39.194235   48464 cri.go:89] found id: "e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b"
	I0919 20:06:39.194238   48464 cri.go:89] found id: "fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f"
	I0919 20:06:39.194243   48464 cri.go:89] found id: "dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728"
	I0919 20:06:39.194246   48464 cri.go:89] found id: "625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af"
	I0919 20:06:39.194248   48464 cri.go:89] found id: "65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6"
	I0919 20:06:39.194255   48464 cri.go:89] found id: ""
	I0919 20:06:39.194306   48464 ssh_runner.go:195] Run: sudo runc list -f json
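
	The `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation above returns one container ID per line, which are then recorded as the "found id:" entries. A hedged Go sketch of that discovery step follows; running crictl locally without the ssh/sudo wrapper shown in the log is an assumption made for the example.

	// Illustrative sketch: list all kube-system container IDs via crictl, as the
	// CRI listing step above does.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line) // one 64-hex container ID per line, as in the log
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
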
	
	
	==> CRI-O <==
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.231714410Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776654231694587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fa3f50d-6832-4075-b5bf-66aa4a500879 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.232476502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4874cf4c-ab02-4fd1-adca-e2f9aec3cebf name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.232530501Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4874cf4c-ab02-4fd1-adca-e2f9aec3cebf name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.232846961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1d7bd2097da7969101364bded34cb941ec63d5c8d335186fe1c3e2f5ee653a,PodSandboxId:398153f70f0c640ecd20410e84e6ae1981468353b5d5324e3d740298ade9168a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726776435823242959,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87309d2462fc4f7dfa4a9c5baf53f6a205cce9e51b2069bf554d905b50062ee6,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726776411307311389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdced286d5a6a15cfa4737af7cedc044f1b5f2176b096eb0c558979e58d05bdb,PodSandboxId:9fd4a69759df5b3764e69d2e95c8294bdd52c02e76c0409791d4dd20de44b5d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726776402374414164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c56b813
5-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91b2b6716ecb009c297b64b6e3a197b2b1ccfb373808d9960b1b97761172f09,PodSandboxId:8e9ee218230cf1f2e8fd6ddace0a167e8fbc169c31abdab80004d3273e8af707,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726776402538704736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},An
notations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3df28a477bc9b3a710219db447412f3bffc1d630456b14fc6bd107bbea44c55,PodSandboxId:224f00c2f20983646cdcd50553060fd16a1912e4b8adb12b7ffae222a15d50ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726776402385443300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85476d7e8d2b82a4dc3231d06dcca93f418d33c58c1a55f9da28344d912aac0a,PodSandboxId:e1f00caf995deb572d0e41e94b279b718b391caf87482f4e853c2e6685ed3f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726776402298290825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io.kubernetes.container.hash:
12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30431653e0e43ed529bb73220f39ab0fe58f2228aca51af2005a98e730ee5eca,PodSandboxId:8274e970fa8a4796eedb588ea33c1b8fcc0db0f9f1cd7bcd1a723893b17d126f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726776402253040111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69bde3b12f7d33021d4a5b784e9a8355feb38ad0f68cc72f6ce0e95f8090386d,PodSandboxId:d97cc1b9bfb7e3b6554a11e8a99779d72501c9ed2627a4f653afe5e63678f046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726776402202701308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.container.hash: 7
df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f0a92696c2dd48ea17d23a80293b334aafee2af059bc2b881cc64a2250c13a,PodSandboxId:3d7c4d3431ba405cc382d772533b7f690de776d4e0118efaa2c04205df266838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726776402095959832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726776398920591256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08cd67d64187e006994bc65839810a122496131699388a5379e209bf1e1b614,PodSandboxId:111fee9576f330deeea7b39a27ba3438989137455f74de336149bc60f6df7990,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726776081068852145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89,PodSandboxId:79a63ce099f45bd7977e3ae258f8d8cea024ad943b1d108eff7f159926dd7238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726776022932528561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: c56b8135-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b,PodSandboxId:d3fa20aed888f943be3030a650c6f710139632afbef3097461659d701298c3b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726776010913361485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b,PodSandboxId:c1a37209beb6fc5e334ca94bc59827bb5253e7859d94ad3dec33e37d856d5624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726776010850418703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31
-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f,PodSandboxId:f4592b7fce465589a0e1c51c95be50805f1129af964d1983dd06209bb65420bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726775998888676408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728,PodSandboxId:fe5f49b8d407d041f4cf9d974d854cde52e888e76ac3f66f5ed4cf54b1ca8111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775998866054027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af,PodSandboxId:7a92ed2f7d51e6a7e5b571faee81752a0b526c7420aa32e49252b63d2b7682aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726775998833399338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6,PodSandboxId:00405c53af3a27930cbdadb4a4ba8c44fd9334f2d2c6c21e4771f1de907b9c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775998804230790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4874cf4c-ab02-4fd1-adca-e2f9aec3cebf name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.273323131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c4b53d2-bdaf-4fe9-9459-79c73b16b488 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.273658806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c4b53d2-bdaf-4fe9-9459-79c73b16b488 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.275218003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f89efc77-0057-41ac-a4da-58fbd2492e2f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.275604445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776654275580935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f89efc77-0057-41ac-a4da-58fbd2492e2f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.276289801Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b64af485-d40d-435e-ba6f-8d14150ad859 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.276342909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b64af485-d40d-435e-ba6f-8d14150ad859 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.276734403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1d7bd2097da7969101364bded34cb941ec63d5c8d335186fe1c3e2f5ee653a,PodSandboxId:398153f70f0c640ecd20410e84e6ae1981468353b5d5324e3d740298ade9168a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726776435823242959,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87309d2462fc4f7dfa4a9c5baf53f6a205cce9e51b2069bf554d905b50062ee6,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726776411307311389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdced286d5a6a15cfa4737af7cedc044f1b5f2176b096eb0c558979e58d05bdb,PodSandboxId:9fd4a69759df5b3764e69d2e95c8294bdd52c02e76c0409791d4dd20de44b5d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726776402374414164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c56b813
5-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91b2b6716ecb009c297b64b6e3a197b2b1ccfb373808d9960b1b97761172f09,PodSandboxId:8e9ee218230cf1f2e8fd6ddace0a167e8fbc169c31abdab80004d3273e8af707,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726776402538704736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},An
notations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3df28a477bc9b3a710219db447412f3bffc1d630456b14fc6bd107bbea44c55,PodSandboxId:224f00c2f20983646cdcd50553060fd16a1912e4b8adb12b7ffae222a15d50ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726776402385443300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85476d7e8d2b82a4dc3231d06dcca93f418d33c58c1a55f9da28344d912aac0a,PodSandboxId:e1f00caf995deb572d0e41e94b279b718b391caf87482f4e853c2e6685ed3f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726776402298290825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io.kubernetes.container.hash:
12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30431653e0e43ed529bb73220f39ab0fe58f2228aca51af2005a98e730ee5eca,PodSandboxId:8274e970fa8a4796eedb588ea33c1b8fcc0db0f9f1cd7bcd1a723893b17d126f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726776402253040111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69bde3b12f7d33021d4a5b784e9a8355feb38ad0f68cc72f6ce0e95f8090386d,PodSandboxId:d97cc1b9bfb7e3b6554a11e8a99779d72501c9ed2627a4f653afe5e63678f046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726776402202701308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.container.hash: 7
df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f0a92696c2dd48ea17d23a80293b334aafee2af059bc2b881cc64a2250c13a,PodSandboxId:3d7c4d3431ba405cc382d772533b7f690de776d4e0118efaa2c04205df266838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726776402095959832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726776398920591256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08cd67d64187e006994bc65839810a122496131699388a5379e209bf1e1b614,PodSandboxId:111fee9576f330deeea7b39a27ba3438989137455f74de336149bc60f6df7990,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726776081068852145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89,PodSandboxId:79a63ce099f45bd7977e3ae258f8d8cea024ad943b1d108eff7f159926dd7238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726776022932528561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: c56b8135-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b,PodSandboxId:d3fa20aed888f943be3030a650c6f710139632afbef3097461659d701298c3b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726776010913361485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b,PodSandboxId:c1a37209beb6fc5e334ca94bc59827bb5253e7859d94ad3dec33e37d856d5624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726776010850418703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31
-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f,PodSandboxId:f4592b7fce465589a0e1c51c95be50805f1129af964d1983dd06209bb65420bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726775998888676408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728,PodSandboxId:fe5f49b8d407d041f4cf9d974d854cde52e888e76ac3f66f5ed4cf54b1ca8111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775998866054027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af,PodSandboxId:7a92ed2f7d51e6a7e5b571faee81752a0b526c7420aa32e49252b63d2b7682aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726775998833399338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6,PodSandboxId:00405c53af3a27930cbdadb4a4ba8c44fd9334f2d2c6c21e4771f1de907b9c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775998804230790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b64af485-d40d-435e-ba6f-8d14150ad859 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.326951559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aac60713-a68c-4fa0-8b79-9da97ff12888 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.327022184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aac60713-a68c-4fa0-8b79-9da97ff12888 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.328373068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b32fe62a-a5f6-495d-a183-15c7ed061c4c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.328942680Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776654328917256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b32fe62a-a5f6-495d-a183-15c7ed061c4c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.329814266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19c41b9c-4706-417c-b6ad-7862df93a3c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.329890165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19c41b9c-4706-417c-b6ad-7862df93a3c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.332790874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1d7bd2097da7969101364bded34cb941ec63d5c8d335186fe1c3e2f5ee653a,PodSandboxId:398153f70f0c640ecd20410e84e6ae1981468353b5d5324e3d740298ade9168a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726776435823242959,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87309d2462fc4f7dfa4a9c5baf53f6a205cce9e51b2069bf554d905b50062ee6,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726776411307311389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdced286d5a6a15cfa4737af7cedc044f1b5f2176b096eb0c558979e58d05bdb,PodSandboxId:9fd4a69759df5b3764e69d2e95c8294bdd52c02e76c0409791d4dd20de44b5d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726776402374414164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c56b813
5-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91b2b6716ecb009c297b64b6e3a197b2b1ccfb373808d9960b1b97761172f09,PodSandboxId:8e9ee218230cf1f2e8fd6ddace0a167e8fbc169c31abdab80004d3273e8af707,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726776402538704736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},An
notations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3df28a477bc9b3a710219db447412f3bffc1d630456b14fc6bd107bbea44c55,PodSandboxId:224f00c2f20983646cdcd50553060fd16a1912e4b8adb12b7ffae222a15d50ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726776402385443300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85476d7e8d2b82a4dc3231d06dcca93f418d33c58c1a55f9da28344d912aac0a,PodSandboxId:e1f00caf995deb572d0e41e94b279b718b391caf87482f4e853c2e6685ed3f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726776402298290825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io.kubernetes.container.hash:
12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30431653e0e43ed529bb73220f39ab0fe58f2228aca51af2005a98e730ee5eca,PodSandboxId:8274e970fa8a4796eedb588ea33c1b8fcc0db0f9f1cd7bcd1a723893b17d126f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726776402253040111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69bde3b12f7d33021d4a5b784e9a8355feb38ad0f68cc72f6ce0e95f8090386d,PodSandboxId:d97cc1b9bfb7e3b6554a11e8a99779d72501c9ed2627a4f653afe5e63678f046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726776402202701308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.container.hash: 7
df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f0a92696c2dd48ea17d23a80293b334aafee2af059bc2b881cc64a2250c13a,PodSandboxId:3d7c4d3431ba405cc382d772533b7f690de776d4e0118efaa2c04205df266838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726776402095959832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726776398920591256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08cd67d64187e006994bc65839810a122496131699388a5379e209bf1e1b614,PodSandboxId:111fee9576f330deeea7b39a27ba3438989137455f74de336149bc60f6df7990,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726776081068852145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89,PodSandboxId:79a63ce099f45bd7977e3ae258f8d8cea024ad943b1d108eff7f159926dd7238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726776022932528561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: c56b8135-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b,PodSandboxId:d3fa20aed888f943be3030a650c6f710139632afbef3097461659d701298c3b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726776010913361485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b,PodSandboxId:c1a37209beb6fc5e334ca94bc59827bb5253e7859d94ad3dec33e37d856d5624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726776010850418703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31
-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f,PodSandboxId:f4592b7fce465589a0e1c51c95be50805f1129af964d1983dd06209bb65420bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726775998888676408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728,PodSandboxId:fe5f49b8d407d041f4cf9d974d854cde52e888e76ac3f66f5ed4cf54b1ca8111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775998866054027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af,PodSandboxId:7a92ed2f7d51e6a7e5b571faee81752a0b526c7420aa32e49252b63d2b7682aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726775998833399338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6,PodSandboxId:00405c53af3a27930cbdadb4a4ba8c44fd9334f2d2c6c21e4771f1de907b9c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775998804230790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19c41b9c-4706-417c-b6ad-7862df93a3c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.379380289Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abe58afa-6af8-41e2-9bd3-e5462e3f6a11 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.379451933Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abe58afa-6af8-41e2-9bd3-e5462e3f6a11 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.380487406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c903b73-5880-4387-9e32-702d794ccc8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.380845137Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776654380824396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c903b73-5880-4387-9e32-702d794ccc8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.382030475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4af49dd-b527-41a3-9f21-d72c45972cd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.382270221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4af49dd-b527-41a3-9f21-d72c45972cd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:10:54 multinode-282812 crio[2723]: time="2024-09-19 20:10:54.382599441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1d7bd2097da7969101364bded34cb941ec63d5c8d335186fe1c3e2f5ee653a,PodSandboxId:398153f70f0c640ecd20410e84e6ae1981468353b5d5324e3d740298ade9168a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726776435823242959,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87309d2462fc4f7dfa4a9c5baf53f6a205cce9e51b2069bf554d905b50062ee6,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726776411307311389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdced286d5a6a15cfa4737af7cedc044f1b5f2176b096eb0c558979e58d05bdb,PodSandboxId:9fd4a69759df5b3764e69d2e95c8294bdd52c02e76c0409791d4dd20de44b5d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726776402374414164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c56b813
5-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91b2b6716ecb009c297b64b6e3a197b2b1ccfb373808d9960b1b97761172f09,PodSandboxId:8e9ee218230cf1f2e8fd6ddace0a167e8fbc169c31abdab80004d3273e8af707,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726776402538704736,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},An
notations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3df28a477bc9b3a710219db447412f3bffc1d630456b14fc6bd107bbea44c55,PodSandboxId:224f00c2f20983646cdcd50553060fd16a1912e4b8adb12b7ffae222a15d50ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726776402385443300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85476d7e8d2b82a4dc3231d06dcca93f418d33c58c1a55f9da28344d912aac0a,PodSandboxId:e1f00caf995deb572d0e41e94b279b718b391caf87482f4e853c2e6685ed3f8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726776402298290825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io.kubernetes.container.hash:
12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30431653e0e43ed529bb73220f39ab0fe58f2228aca51af2005a98e730ee5eca,PodSandboxId:8274e970fa8a4796eedb588ea33c1b8fcc0db0f9f1cd7bcd1a723893b17d126f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726776402253040111,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69bde3b12f7d33021d4a5b784e9a8355feb38ad0f68cc72f6ce0e95f8090386d,PodSandboxId:d97cc1b9bfb7e3b6554a11e8a99779d72501c9ed2627a4f653afe5e63678f046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726776402202701308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.container.hash: 7
df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f0a92696c2dd48ea17d23a80293b334aafee2af059bc2b881cc64a2250c13a,PodSandboxId:3d7c4d3431ba405cc382d772533b7f690de776d4e0118efaa2c04205df266838,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726776402095959832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63,PodSandboxId:96b8f5b47395dacefff4e58bd4415e4a7d2f629a01ad65a41e5540476edfbdfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726776398920591256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7p947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b24410a-0b22-46ea-b44e-c23dc66b228b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08cd67d64187e006994bc65839810a122496131699388a5379e209bf1e1b614,PodSandboxId:111fee9576f330deeea7b39a27ba3438989137455f74de336149bc60f6df7990,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726776081068852145,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-mmwbs,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcce5e39-ccdd-459d-832e-f827c64e7d06,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a226c55e3f797325644b887e6d392a86b8dd2652d43ecb2d9944e9b1d815b89,PodSandboxId:79a63ce099f45bd7977e3ae258f8d8cea024ad943b1d108eff7f159926dd7238,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726776022932528561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: c56b8135-7e04-4c2a-ab3e-f3d05774cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b,PodSandboxId:d3fa20aed888f943be3030a650c6f710139632afbef3097461659d701298c3b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726776010913361485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z66g5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: f2e16a09-ea87-4b3a-bca9-da6842b291e8,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b,PodSandboxId:c1a37209beb6fc5e334ca94bc59827bb5253e7859d94ad3dec33e37d856d5624,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726776010850418703,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gckr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559b255a-529d-40e4-bb31
-94ae224f5810,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f,PodSandboxId:f4592b7fce465589a0e1c51c95be50805f1129af964d1983dd06209bb65420bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726775998888676408,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93982156525eed78b3970b7fa8c87333,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728,PodSandboxId:fe5f49b8d407d041f4cf9d974d854cde52e888e76ac3f66f5ed4cf54b1ca8111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726775998866054027,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b84447e8e624b2218e517d85c606c2e,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af,PodSandboxId:7a92ed2f7d51e6a7e5b571faee81752a0b526c7420aa32e49252b63d2b7682aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726775998833399338,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85acb152b0f90de7dd310c0b4cf89f9,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6,PodSandboxId:00405c53af3a27930cbdadb4a4ba8c44fd9334f2d2c6c21e4771f1de907b9c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726775998804230790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-282812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b2401166902afd8cf1d3a7493fb9890,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4af49dd-b527-41a3-9f21-d72c45972cd2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb1d7bd2097da       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   398153f70f0c6       busybox-7dff88458-mmwbs
	87309d2462fc4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   2                   96b8f5b47395d       coredns-7c65d6cfc9-7p947
	d91b2b6716ecb       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   8e9ee218230cf       kindnet-z66g5
	b3df28a477bc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   224f00c2f2098       etcd-multinode-282812
	fdced286d5a6a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   9fd4a69759df5       storage-provisioner
	85476d7e8d2b8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   e1f00caf995de       kube-scheduler-multinode-282812
	30431653e0e43       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   8274e970fa8a4       kube-controller-manager-multinode-282812
	69bde3b12f7d3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   d97cc1b9bfb7e       kube-apiserver-multinode-282812
	15f0a92696c2d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   3d7c4d3431ba4       kube-proxy-gckr9
	d1a67d9740309       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Exited              coredns                   1                   96b8f5b47395d       coredns-7c65d6cfc9-7p947
	f08cd67d64187       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   111fee9576f33       busybox-7dff88458-mmwbs
	8a226c55e3f79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   79a63ce099f45       storage-provisioner
	45527d61634e0       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   d3fa20aed888f       kindnet-z66g5
	e4f064262cf36       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   c1a37209beb6f       kube-proxy-gckr9
	fb7cd7e02ae6b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   f4592b7fce465       etcd-multinode-282812
	dc3ea0d6f2bb7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   fe5f49b8d407d       kube-controller-manager-multinode-282812
	625d2fcd75cad       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   7a92ed2f7d51e       kube-scheduler-multinode-282812
	65a25f681cf69       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   00405c53af3a2       kube-apiserver-multinode-282812
	
	
	==> coredns [87309d2462fc4f7dfa4a9c5baf53f6a205cce9e51b2069bf554d905b50062ee6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57245 - 58410 "HINFO IN 71375068057553640.5908403203819485535. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.014069259s
	
	
	==> coredns [d1a67d974030935f49e25926cd8fbdd55af4d656df9e6ebcd1ce122830c03f63] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:48969 - 15932 "HINFO IN 6371373735206316795.7861135580671048157. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018746069s
	
	
	==> describe nodes <==
	Name:               multinode-282812
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-282812
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=multinode-282812
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T20_00_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 20:00:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-282812
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 20:10:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 20:06:51 +0000   Thu, 19 Sep 2024 19:59:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 20:06:51 +0000   Thu, 19 Sep 2024 19:59:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 20:06:51 +0000   Thu, 19 Sep 2024 19:59:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 20:06:51 +0000   Thu, 19 Sep 2024 20:00:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    multinode-282812
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5cf43698a42a4ce48b0c060c07aadae3
	  System UUID:                5cf43698-a42a-4ce4-8b0c-060c07aadae3
	  Boot ID:                    853f9e82-c4a8-4f86-acd0-9c089477abdb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mmwbs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 coredns-7c65d6cfc9-7p947                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-282812                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-z66g5                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-282812             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-282812    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-gckr9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-282812             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m8s                   kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node multinode-282812 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node multinode-282812 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node multinode-282812 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node multinode-282812 event: Registered Node multinode-282812 in Controller
	  Normal   NodeReady                10m                    kubelet          Node multinode-282812 status is now: NodeReady
	  Warning  ContainerGCFailed        4m50s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             4m22s (x6 over 5m13s)  kubelet          Node multinode-282812 status is now: NodeNotReady
	  Normal   RegisteredNode           4m5s                   node-controller  Node multinode-282812 event: Registered Node multinode-282812 in Controller
	  Normal   Starting                 4m4s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m3s                   kubelet          Node multinode-282812 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m3s                   kubelet          Node multinode-282812 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m3s                   kubelet          Node multinode-282812 status is now: NodeHasSufficientPID
	
	
	Name:               multinode-282812-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-282812-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=multinode-282812
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T20_07_27_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 20:07:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-282812-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 20:08:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 19 Sep 2024 20:07:57 +0000   Thu, 19 Sep 2024 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 19 Sep 2024 20:07:57 +0000   Thu, 19 Sep 2024 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 19 Sep 2024 20:07:57 +0000   Thu, 19 Sep 2024 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 19 Sep 2024 20:07:57 +0000   Thu, 19 Sep 2024 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    multinode-282812-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 62c948409d25441e8e056ca589512803
	  System UUID:                62c94840-9d25-441e-8e05-6ca589512803
	  Boot ID:                    bd57a503-e00a-4e1d-b9cf-b0757a95652e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l8hqk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kindnet-stjkn              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-pbj4d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m23s                  kube-proxy       
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-282812-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-282812-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-282812-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m39s                  kubelet          Node multinode-282812-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m27s (x2 over 3m27s)  kubelet          Node multinode-282812-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x2 over 3m27s)  kubelet          Node multinode-282812-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x2 over 3m27s)  kubelet          Node multinode-282812-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m8s                   kubelet          Node multinode-282812-m02 status is now: NodeReady
	  Normal  NodeNotReady             105s                   node-controller  Node multinode-282812-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.058293] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.177391] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.127970] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.254722] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.838140] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +4.070663] systemd-fstab-generator[870]: Ignoring "noauto" option for root device
	[  +0.056166] kauditd_printk_skb: 158 callbacks suppressed
	[Sep19 20:00] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.091101] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.179185] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.110556] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.293977] kauditd_printk_skb: 69 callbacks suppressed
	[Sep19 20:01] kauditd_printk_skb: 12 callbacks suppressed
	[Sep19 20:06] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.140037] systemd-fstab-generator[2659]: Ignoring "noauto" option for root device
	[  +0.171293] systemd-fstab-generator[2673]: Ignoring "noauto" option for root device
	[  +0.137697] systemd-fstab-generator[2685]: Ignoring "noauto" option for root device
	[  +0.277525] systemd-fstab-generator[2713]: Ignoring "noauto" option for root device
	[  +0.672603] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +3.712910] kauditd_printk_skb: 152 callbacks suppressed
	[  +7.124274] kauditd_printk_skb: 42 callbacks suppressed
	[  +1.233395] systemd-fstab-generator[3686]: Ignoring "noauto" option for root device
	[  +4.111486] kauditd_printk_skb: 21 callbacks suppressed
	[Sep19 20:07] systemd-fstab-generator[3859]: Ignoring "noauto" option for root device
	[ +13.196838] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [b3df28a477bc9b3a710219db447412f3bffc1d630456b14fc6bd107bbea44c55] <==
	{"level":"info","ts":"2024-09-19T20:06:43.141841Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-19T20:06:43.141906Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-19T20:06:43.141917Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-19T20:06:43.142612Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T20:06:43.144811Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-09-19T20:06:43.144840Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-09-19T20:06:43.144755Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-19T20:06:43.146025Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aad771494ea7416a","initial-advertise-peer-urls":["https://192.168.39.87:2380"],"listen-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.87:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-19T20:06:43.146142Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-19T20:06:44.414562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-19T20:06:44.414680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-19T20:06:44.414729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgPreVoteResp from aad771494ea7416a at term 2"}
	{"level":"info","ts":"2024-09-19T20:06:44.414777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became candidate at term 3"}
	{"level":"info","ts":"2024-09-19T20:06:44.414801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgVoteResp from aad771494ea7416a at term 3"}
	{"level":"info","ts":"2024-09-19T20:06:44.414827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became leader at term 3"}
	{"level":"info","ts":"2024-09-19T20:06:44.414853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aad771494ea7416a elected leader aad771494ea7416a at term 3"}
	{"level":"info","ts":"2024-09-19T20:06:44.417528Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aad771494ea7416a","local-member-attributes":"{Name:multinode-282812 ClientURLs:[https://192.168.39.87:2379]}","request-path":"/0/members/aad771494ea7416a/attributes","cluster-id":"8794d44e1d88e05d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T20:06:44.417618Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T20:06:44.417723Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T20:06:44.417766Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T20:06:44.417784Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T20:06:44.418739Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T20:06:44.418830Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T20:06:44.419601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T20:06:44.419742Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.87:2379"}
	
	
	==> etcd [fb7cd7e02ae6bc8bc271850298aac7a9081c85a98ad3401ef4893ef339cf868f] <==
	{"level":"info","ts":"2024-09-19T20:00:00.496336Z","caller":"traceutil/trace.go:171","msg":"trace[1524938224] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:1; }","duration":"141.141279ms","start":"2024-09-19T20:00:00.355179Z","end":"2024-09-19T20:00:00.496320Z","steps":["trace[1524938224] 'count revisions from in-memory index tree'  (duration: 140.970388ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:00:00.496451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.142189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-09-19T20:00:00.496511Z","caller":"traceutil/trace.go:171","msg":"trace[1779907709] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:1; }","duration":"141.209311ms","start":"2024-09-19T20:00:00.355296Z","end":"2024-09-19T20:00:00.496505Z","steps":["trace[1779907709] 'range keys from in-memory index tree'  (duration: 141.116488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:00:54.828847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.964421ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4713740539675766913 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-282812-m02.17f6bdac3c539488\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-282812-m02.17f6bdac3c539488\" value_size:646 lease:4713740539675766328 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T20:00:54.828938Z","caller":"traceutil/trace.go:171","msg":"trace[1876267680] linearizableReadLoop","detail":"{readStateIndex:457; appliedIndex:456; }","duration":"158.751071ms","start":"2024-09-19T20:00:54.670168Z","end":"2024-09-19T20:00:54.828919Z","steps":["trace[1876267680] 'read index received'  (duration: 20.996µs)","trace[1876267680] 'applied index is now lower than readState.Index'  (duration: 158.729109ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T20:00:54.829002Z","caller":"traceutil/trace.go:171","msg":"trace[622685273] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"235.686418ms","start":"2024-09-19T20:00:54.593307Z","end":"2024-09-19T20:00:54.828994Z","steps":["trace[622685273] 'process raft request'  (duration: 75.921905ms)","trace[622685273] 'compare'  (duration: 158.831945ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T20:00:54.829295Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.106441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-282812-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T20:00:54.829337Z","caller":"traceutil/trace.go:171","msg":"trace[790832056] range","detail":"{range_begin:/registry/minions/multinode-282812-m02; range_end:; response_count:0; response_revision:440; }","duration":"159.16534ms","start":"2024-09-19T20:00:54.670164Z","end":"2024-09-19T20:00:54.829329Z","steps":["trace[790832056] 'agreement among raft nodes before linearized reading'  (duration: 159.09047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:01:51.453686Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.055733ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4713740539675767432 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-282812-m03.17f6bdb96b6b3f3b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-282812-m03.17f6bdb96b6b3f3b\" value_size:646 lease:4713740539675767037 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T20:01:51.453928Z","caller":"traceutil/trace.go:171","msg":"trace[1864109658] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:605; }","duration":"167.532796ms","start":"2024-09-19T20:01:51.286378Z","end":"2024-09-19T20:01:51.453911Z","steps":["trace[1864109658] 'read index received'  (duration: 40.117116ms)","trace[1864109658] 'applied index is now lower than readState.Index'  (duration: 127.415002ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T20:01:51.454033Z","caller":"traceutil/trace.go:171","msg":"trace[943548277] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"219.564162ms","start":"2024-09-19T20:01:51.234454Z","end":"2024-09-19T20:01:51.454018Z","steps":["trace[943548277] 'process raft request'  (duration: 92.083344ms)","trace[943548277] 'compare'  (duration: 126.956601ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T20:01:51.454459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.866099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-282812-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T20:01:51.454769Z","caller":"traceutil/trace.go:171","msg":"trace[441130642] range","detail":"{range_begin:/registry/minions/multinode-282812-m03; range_end:; response_count:0; response_revision:576; }","duration":"168.336367ms","start":"2024-09-19T20:01:51.286374Z","end":"2024-09-19T20:01:51.454711Z","steps":["trace[441130642] 'agreement among raft nodes before linearized reading'  (duration: 167.645982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:01:51.455513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.529244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-19T20:01:51.455645Z","caller":"traceutil/trace.go:171","msg":"trace[129924880] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:576; }","duration":"128.704509ms","start":"2024-09-19T20:01:51.326916Z","end":"2024-09-19T20:01:51.455620Z","steps":["trace[129924880] 'agreement among raft nodes before linearized reading'  (duration: 128.329092ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T20:05:05.914802Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-19T20:05:05.914938Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-282812","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"]}
	{"level":"warn","ts":"2024-09-19T20:05:05.915081Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:05:05.915258Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:05:05.958018Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.87:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:05:05.958088Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.87:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-19T20:05:05.960630Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aad771494ea7416a","current-leader-member-id":"aad771494ea7416a"}
	{"level":"info","ts":"2024-09-19T20:05:05.965578Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-09-19T20:05:05.965757Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-09-19T20:05:05.965798Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-282812","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"]}
	
	
	==> kernel <==
	 20:10:54 up 11 min,  0 users,  load average: 0.30, 0.24, 0.15
	Linux multinode-282812 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [45527d61634e0609c9d4510b7461a9ce2924d3bf99955f37f833453ac768408b] <==
	I0919 20:04:21.879825       1 main.go:299] handling current node
	I0919 20:04:31.879271       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:04:31.879451       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:04:31.879659       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:04:31.879685       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.5.0/24] 
	I0919 20:04:31.879748       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:04:31.879766       1 main.go:299] handling current node
	I0919 20:04:41.870816       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:04:41.871036       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:04:41.871258       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:04:41.871288       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.5.0/24] 
	I0919 20:04:41.871405       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:04:41.871426       1 main.go:299] handling current node
	I0919 20:04:51.879978       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:04:51.880077       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.5.0/24] 
	I0919 20:04:51.880284       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:04:51.880314       1 main.go:299] handling current node
	I0919 20:04:51.880338       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:04:51.880353       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:05:01.876436       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0919 20:05:01.876484       1 main.go:322] Node multinode-282812-m03 has CIDR [10.244.5.0/24] 
	I0919 20:05:01.876616       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:05:01.876736       1 main.go:299] handling current node
	I0919 20:05:01.876880       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:05:01.877046       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [d91b2b6716ecb009c297b64b6e3a197b2b1ccfb373808d9960b1b97761172f09] <==
	I0919 20:09:53.545031       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:10:03.546048       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:10:03.546217       1 main.go:299] handling current node
	I0919 20:10:03.546234       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:10:03.546243       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:10:13.547057       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:10:13.547226       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:10:13.547394       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:10:13.547418       1 main.go:299] handling current node
	I0919 20:10:23.544274       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:10:23.544403       1 main.go:299] handling current node
	I0919 20:10:23.544443       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:10:23.544461       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:10:33.547138       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:10:33.547214       1 main.go:299] handling current node
	I0919 20:10:33.547263       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:10:33.547271       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:10:43.538236       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:10:43.538566       1 main.go:299] handling current node
	I0919 20:10:43.538629       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:10:43.538651       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	I0919 20:10:53.540684       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0919 20:10:53.540792       1 main.go:299] handling current node
	I0919 20:10:53.540821       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0919 20:10:53.540840       1 main.go:322] Node multinode-282812-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [65a25f681cf693c7b5e90ad773ce4fc671646822e571d41c597304afe46b90d6] <==
	I0919 20:00:02.600799       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0919 20:00:02.600838       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 20:00:03.363896       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 20:00:03.408494       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 20:00:03.506298       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0919 20:00:03.513052       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.87]
	I0919 20:00:03.514004       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 20:00:03.518010       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 20:00:03.798332       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0919 20:00:04.552402       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0919 20:00:04.567800       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 20:00:04.579878       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 20:00:09.334588       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0919 20:00:09.499206       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0919 20:01:22.206050       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41092: use of closed network connection
	E0919 20:01:22.382465       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41108: use of closed network connection
	E0919 20:01:22.557863       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41118: use of closed network connection
	E0919 20:01:22.733398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41132: use of closed network connection
	E0919 20:01:22.894715       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41142: use of closed network connection
	E0919 20:01:23.056367       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41162: use of closed network connection
	E0919 20:01:23.323830       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41182: use of closed network connection
	E0919 20:01:23.493609       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41202: use of closed network connection
	E0919 20:01:23.658689       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41232: use of closed network connection
	E0919 20:01:23.828833       1 conn.go:339] Error on socket receive: read tcp 192.168.39.87:8443->192.168.39.1:41246: use of closed network connection
	I0919 20:05:05.918255       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [69bde3b12f7d33021d4a5b784e9a8355feb38ad0f68cc72f6ce0e95f8090386d] <==
	I0919 20:06:45.776765       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0919 20:06:45.777871       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0919 20:06:45.784186       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0919 20:06:45.784292       1 aggregator.go:171] initial CRD sync complete...
	I0919 20:06:45.784322       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 20:06:45.784345       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 20:06:45.784367       1 cache.go:39] Caches are synced for autoregister controller
	I0919 20:06:45.792190       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0919 20:06:45.792249       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0919 20:06:45.793566       1 shared_informer.go:320] Caches are synced for configmaps
	I0919 20:06:45.793638       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 20:06:45.793669       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0919 20:06:45.793657       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 20:06:45.796363       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 20:06:45.796453       1 policy_source.go:224] refreshing policies
	I0919 20:06:45.800678       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0919 20:06:45.860543       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 20:06:46.666354       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 20:06:49.155327       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 20:06:49.261074       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 20:06:51.482852       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 20:06:51.611619       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0919 20:06:51.633518       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0919 20:06:51.714667       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 20:06:51.720298       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [30431653e0e43ed529bb73220f39ab0fe58f2228aca51af2005a98e730ee5eca] <==
	I0919 20:08:05.976002       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282812-m03" podCIDRs=["10.244.2.0/24"]
	I0919 20:08:05.976045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:05.976076       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:05.976365       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:06.285070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:06.625731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:09.339967       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:16.185517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:24.641637       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:08:24.641711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:24.653514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:29.252537       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:29.287286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:29.304071       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:29.869202       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:08:29.869610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:09:09.274052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:09:09.293261       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:09:09.309988       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.979155ms"
	I0919 20:09:09.310139       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.891µs"
	I0919 20:09:14.369680       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:09:29.096003       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-jrlhz"
	I0919 20:09:29.118753       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-jrlhz"
	I0919 20:09:29.118872       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-c4mtw"
	I0919 20:09:29.144945       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-c4mtw"
	
	
	==> kube-controller-manager [dc3ea0d6f2bb7d8185ff9489063147c6d86b5ff8c3873a280b52224abb053728] <==
	I0919 20:02:40.314473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:40.544592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:40.544738       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:02:41.567886       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:02:41.568810       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-282812-m03\" does not exist"
	I0919 20:02:41.590291       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-282812-m03" podCIDRs=["10.244.5.0/24"]
	I0919 20:02:41.590394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:41.590442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:41.599725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:41.913206       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:43.603485       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:02:51.859243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:00.415753       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m02"
	I0919 20:03:00.416221       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:00.427514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:03.594151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:38.610651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:03:38.610930       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-282812-m03"
	I0919 20:03:38.627877       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:03:38.665346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.40248ms"
	I0919 20:03:38.666339       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.857µs"
	I0919 20:03:43.667024       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:43.691649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	I0919 20:03:43.722289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m02"
	I0919 20:03:53.803569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-282812-m03"
	
	
	==> kube-proxy [15f0a92696c2dd48ea17d23a80293b334aafee2af059bc2b881cc64a2250c13a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 20:06:43.375354       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 20:06:45.761784       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.87"]
	E0919 20:06:45.761874       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 20:06:45.830218       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 20:06:45.830271       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 20:06:45.830296       1 server_linux.go:169] "Using iptables Proxier"
	I0919 20:06:45.832785       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 20:06:45.833138       1 server.go:483] "Version info" version="v1.31.1"
	I0919 20:06:45.833191       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:06:45.834941       1 config.go:199] "Starting service config controller"
	I0919 20:06:45.834993       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 20:06:45.835039       1 config.go:105] "Starting endpoint slice config controller"
	I0919 20:06:45.835058       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 20:06:45.835683       1 config.go:328] "Starting node config controller"
	I0919 20:06:45.835717       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 20:06:45.935302       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 20:06:45.935485       1 shared_informer.go:320] Caches are synced for service config
	I0919 20:06:45.937198       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e4f064262cf36ca3d58910c4531af34c73b1af06ae3e1699c3167b09e416b60b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 20:00:11.052505       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 20:00:11.062454       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.87"]
	E0919 20:00:11.062662       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 20:00:11.130244       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 20:00:11.130282       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 20:00:11.130304       1 server_linux.go:169] "Using iptables Proxier"
	I0919 20:00:11.133161       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 20:00:11.133461       1 server.go:483] "Version info" version="v1.31.1"
	I0919 20:00:11.133612       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:00:11.135205       1 config.go:199] "Starting service config controller"
	I0919 20:00:11.135255       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 20:00:11.135298       1 config.go:105] "Starting endpoint slice config controller"
	I0919 20:00:11.135314       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 20:00:11.135838       1 config.go:328] "Starting node config controller"
	I0919 20:00:11.135911       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 20:00:11.235444       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 20:00:11.235480       1 shared_informer.go:320] Caches are synced for service config
	I0919 20:00:11.236073       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [625d2fcd75cad78e0ad64623cb266fbfbbe327256db2040303a5740c9b0ed7af] <==
	E0919 20:00:01.790894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:01.790962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 20:00:01.790996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.603273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 20:00:02.603386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.651380       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 20:00:02.651530       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0919 20:00:02.673619       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 20:00:02.673667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.674430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 20:00:02.674535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.799592       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 20:00:02.799707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.855700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 20:00:02.855917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.860835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 20:00:02.860887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.924664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 20:00:02.924714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.982482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 20:00:02.982586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 20:00:02.995604       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 20:00:02.995664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0919 20:00:05.254301       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 20:05:05.920372       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [85476d7e8d2b82a4dc3231d06dcca93f418d33c58c1a55f9da28344d912aac0a] <==
	I0919 20:06:43.585665       1 serving.go:386] Generated self-signed cert in-memory
	W0919 20:06:45.737720       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 20:06:45.737808       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 20:06:45.737836       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 20:06:45.737866       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 20:06:45.762237       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0919 20:06:45.762363       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:06:45.764958       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 20:06:45.765263       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:06:45.765997       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 20:06:45.768076       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 20:06:45.866860       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 20:09:40 multinode-282812 kubelet[3693]: E0919 20:09:40.982675    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776580982367264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:09:50 multinode-282812 kubelet[3693]: E0919 20:09:50.845164    3693 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 19 20:09:50 multinode-282812 kubelet[3693]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 20:09:50 multinode-282812 kubelet[3693]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 20:09:50 multinode-282812 kubelet[3693]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 20:09:50 multinode-282812 kubelet[3693]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 20:09:50 multinode-282812 kubelet[3693]: E0919 20:09:50.983778    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776590983584629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:09:50 multinode-282812 kubelet[3693]: E0919 20:09:50.983801    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776590983584629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:00 multinode-282812 kubelet[3693]: E0919 20:10:00.990459    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776600984848671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:00 multinode-282812 kubelet[3693]: E0919 20:10:00.990942    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776600984848671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:10 multinode-282812 kubelet[3693]: E0919 20:10:10.992796    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776610992425855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:10 multinode-282812 kubelet[3693]: E0919 20:10:10.994046    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776610992425855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:21 multinode-282812 kubelet[3693]: E0919 20:10:21.000470    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776620997451780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:21 multinode-282812 kubelet[3693]: E0919 20:10:21.000519    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776620997451780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:31 multinode-282812 kubelet[3693]: E0919 20:10:31.001878    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776631001332870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:31 multinode-282812 kubelet[3693]: E0919 20:10:31.002357    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776631001332870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:41 multinode-282812 kubelet[3693]: E0919 20:10:41.005962    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776641005037261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:41 multinode-282812 kubelet[3693]: E0919 20:10:41.006058    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776641005037261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:50 multinode-282812 kubelet[3693]: E0919 20:10:50.846365    3693 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 19 20:10:50 multinode-282812 kubelet[3693]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 19 20:10:50 multinode-282812 kubelet[3693]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 20:10:50 multinode-282812 kubelet[3693]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 20:10:50 multinode-282812 kubelet[3693]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 20:10:51 multinode-282812 kubelet[3693]: E0919 20:10:51.010057    3693 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776651008508222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:10:51 multinode-282812 kubelet[3693]: E0919 20:10:51.010181    3693 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726776651008508222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 20:10:53.964628   50415 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19664-7917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
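The "token too long" failure in the stderr above is Go's bufio.ErrTooLong: a bufio.Scanner by default refuses any single line longer than bufio.MaxScanTokenSize (64 KiB), and lastStart.txt evidently contains one. A minimal, hypothetical sketch of reading such a long-lined file with a larger per-line limit (illustrative only, not minikube's actual logs.go code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// "lastStart.txt" is a stand-in for any log file with very long lines.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()

		s := bufio.NewScanner(f)
		// Without this call, any line over bufio.MaxScanTokenSize (64 KiB) makes
		// Scan() stop and Err() return "bufio.Scanner: token too long".
		s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB
		for s.Scan() {
			_ = s.Text() // process the line
		}
		if err := s.Err(); err != nil {
			fmt.Println(err)
		}
	}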
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-282812 -n multinode-282812
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-282812 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.73s)

                                                
                                    
x
+
TestPreload (178.73s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-937590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-937590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m34.078842s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-937590 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-937590 image pull gcr.io/k8s-minikube/busybox: (3.249218879s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-937590
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-937590: (7.284058781s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-937590 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-937590 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m11.124720153s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-937590 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
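The list above contains only the images restored from the v1.24.4 preload tarball; the busybox image pulled before the stop is missing, which is what the check at preload_test.go:76 rejects. A rough, hypothetical sketch of that kind of assertion (not the actual test source):

	package main

	import (
		"fmt"
		"strings"
	)

	// imageListContains reports whether `minikube image list` output, given as
	// one image reference per line, mentions the wanted image.
	func imageListContains(output, image string) bool {
		for _, line := range strings.Split(output, "\n") {
			if strings.Contains(strings.TrimSpace(line), image) {
				return true
			}
		}
		return false
	}

	func main() {
		out := "registry.k8s.io/pause:3.7\nregistry.k8s.io/etcd:3.5.3-0\n"
		// With output like the list shown above, this is false, so the test fails.
		fmt.Println(imageListContains(out, "gcr.io/k8s-minikube/busybox"))
	}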
panic.go:629: *** TestPreload FAILED at 2024-09-19 20:17:51.430545526 +0000 UTC m=+5926.132738155
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-937590 -n test-preload-937590
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-937590 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-937590 logs -n 25: (1.054379396s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n multinode-282812 sudo cat                                       | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /home/docker/cp-test_multinode-282812-m03_multinode-282812.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-282812 cp multinode-282812-m03:/home/docker/cp-test.txt                       | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m02:/home/docker/cp-test_multinode-282812-m03_multinode-282812-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n                                                                 | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | multinode-282812-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-282812 ssh -n multinode-282812-m02 sudo cat                                   | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	|         | /home/docker/cp-test_multinode-282812-m03_multinode-282812-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-282812 node stop m03                                                          | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:02 UTC |
	| node    | multinode-282812 node start                                                             | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:02 UTC | 19 Sep 24 20:03 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-282812                                                                | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:03 UTC |                     |
	| stop    | -p multinode-282812                                                                     | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:03 UTC |                     |
	| start   | -p multinode-282812                                                                     | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:05 UTC | 19 Sep 24 20:08 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-282812                                                                | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:08 UTC |                     |
	| node    | multinode-282812 node delete                                                            | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:08 UTC | 19 Sep 24 20:08 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-282812 stop                                                                   | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:08 UTC |                     |
	| start   | -p multinode-282812                                                                     | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:10 UTC | 19 Sep 24 20:14 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-282812                                                                | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:14 UTC |                     |
	| start   | -p multinode-282812-m02                                                                 | multinode-282812-m02 | jenkins | v1.34.0 | 19 Sep 24 20:14 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-282812-m03                                                                 | multinode-282812-m03 | jenkins | v1.34.0 | 19 Sep 24 20:14 UTC | 19 Sep 24 20:14 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-282812                                                                 | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:14 UTC |                     |
	| delete  | -p multinode-282812-m03                                                                 | multinode-282812-m03 | jenkins | v1.34.0 | 19 Sep 24 20:14 UTC | 19 Sep 24 20:14 UTC |
	| delete  | -p multinode-282812                                                                     | multinode-282812     | jenkins | v1.34.0 | 19 Sep 24 20:14 UTC | 19 Sep 24 20:14 UTC |
	| start   | -p test-preload-937590                                                                  | test-preload-937590  | jenkins | v1.34.0 | 19 Sep 24 20:14 UTC | 19 Sep 24 20:16 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-937590 image pull                                                          | test-preload-937590  | jenkins | v1.34.0 | 19 Sep 24 20:16 UTC | 19 Sep 24 20:16 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-937590                                                                  | test-preload-937590  | jenkins | v1.34.0 | 19 Sep 24 20:16 UTC | 19 Sep 24 20:16 UTC |
	| start   | -p test-preload-937590                                                                  | test-preload-937590  | jenkins | v1.34.0 | 19 Sep 24 20:16 UTC | 19 Sep 24 20:17 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-937590 image list                                                          | test-preload-937590  | jenkins | v1.34.0 | 19 Sep 24 20:17 UTC | 19 Sep 24 20:17 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 20:16:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 20:16:40.129098   52819 out.go:345] Setting OutFile to fd 1 ...
	I0919 20:16:40.129353   52819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:16:40.129362   52819 out.go:358] Setting ErrFile to fd 2...
	I0919 20:16:40.129369   52819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:16:40.129558   52819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 20:16:40.130068   52819 out.go:352] Setting JSON to false
	I0919 20:16:40.130999   52819 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7144,"bootTime":1726769856,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 20:16:40.131090   52819 start.go:139] virtualization: kvm guest
	I0919 20:16:40.133093   52819 out.go:177] * [test-preload-937590] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 20:16:40.134302   52819 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 20:16:40.134319   52819 notify.go:220] Checking for updates...
	I0919 20:16:40.136819   52819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 20:16:40.137936   52819 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 20:16:40.139097   52819 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 20:16:40.140241   52819 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 20:16:40.141364   52819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 20:16:40.142891   52819 config.go:182] Loaded profile config "test-preload-937590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0919 20:16:40.143297   52819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:16:40.143341   52819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:16:40.157624   52819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I0919 20:16:40.158109   52819 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:16:40.158629   52819 main.go:141] libmachine: Using API Version  1
	I0919 20:16:40.158649   52819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:16:40.158996   52819 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:16:40.159139   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:16:40.160711   52819 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0919 20:16:40.161801   52819 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 20:16:40.162078   52819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:16:40.162112   52819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:16:40.176079   52819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0919 20:16:40.176493   52819 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:16:40.176915   52819 main.go:141] libmachine: Using API Version  1
	I0919 20:16:40.176937   52819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:16:40.177249   52819 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:16:40.177430   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:16:40.211469   52819 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 20:16:40.212789   52819 start.go:297] selected driver: kvm2
	I0919 20:16:40.212800   52819 start.go:901] validating driver "kvm2" against &{Name:test-preload-937590 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-937590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:16:40.212906   52819 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 20:16:40.213584   52819 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 20:16:40.213660   52819 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 20:16:40.228010   52819 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 20:16:40.228390   52819 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 20:16:40.228439   52819 cni.go:84] Creating CNI manager for ""
	I0919 20:16:40.228498   52819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 20:16:40.228578   52819 start.go:340] cluster config:
	{Name:test-preload-937590 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-937590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:16:40.228703   52819 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 20:16:40.230409   52819 out.go:177] * Starting "test-preload-937590" primary control-plane node in "test-preload-937590" cluster
	I0919 20:16:40.231686   52819 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0919 20:16:40.343237   52819 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0919 20:16:40.343274   52819 cache.go:56] Caching tarball of preloaded images
	I0919 20:16:40.343463   52819 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0919 20:16:40.345236   52819 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0919 20:16:40.346508   52819 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0919 20:16:40.456493   52819 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0919 20:16:53.742891   52819 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0919 20:16:53.742982   52819 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0919 20:16:54.578112   52819 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0919 20:16:54.578246   52819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/config.json ...
	I0919 20:16:54.578479   52819 start.go:360] acquireMachinesLock for test-preload-937590: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 20:16:54.578543   52819 start.go:364] duration metric: took 44.087µs to acquireMachinesLock for "test-preload-937590"
	I0919 20:16:54.578558   52819 start.go:96] Skipping create...Using existing machine configuration
	I0919 20:16:54.578564   52819 fix.go:54] fixHost starting: 
	I0919 20:16:54.578809   52819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:16:54.578841   52819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:16:54.593505   52819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I0919 20:16:54.593966   52819 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:16:54.594423   52819 main.go:141] libmachine: Using API Version  1
	I0919 20:16:54.594453   52819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:16:54.594788   52819 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:16:54.594953   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:16:54.595083   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetState
	I0919 20:16:54.596566   52819 fix.go:112] recreateIfNeeded on test-preload-937590: state=Stopped err=<nil>
	I0919 20:16:54.596584   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	W0919 20:16:54.596734   52819 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 20:16:54.598841   52819 out.go:177] * Restarting existing kvm2 VM for "test-preload-937590" ...
	I0919 20:16:54.600225   52819 main.go:141] libmachine: (test-preload-937590) Calling .Start
	I0919 20:16:54.600384   52819 main.go:141] libmachine: (test-preload-937590) Ensuring networks are active...
	I0919 20:16:54.601097   52819 main.go:141] libmachine: (test-preload-937590) Ensuring network default is active
	I0919 20:16:54.601423   52819 main.go:141] libmachine: (test-preload-937590) Ensuring network mk-test-preload-937590 is active
	I0919 20:16:54.601750   52819 main.go:141] libmachine: (test-preload-937590) Getting domain xml...
	I0919 20:16:54.602426   52819 main.go:141] libmachine: (test-preload-937590) Creating domain...
	I0919 20:16:55.786356   52819 main.go:141] libmachine: (test-preload-937590) Waiting to get IP...
	I0919 20:16:55.787198   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:16:55.787630   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:16:55.787722   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:16:55.787616   52902 retry.go:31] will retry after 203.727278ms: waiting for machine to come up
	I0919 20:16:55.992922   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:16:55.993348   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:16:55.993369   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:16:55.993318   52902 retry.go:31] will retry after 358.008613ms: waiting for machine to come up
	I0919 20:16:56.353040   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:16:56.353454   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:16:56.353495   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:16:56.353405   52902 retry.go:31] will retry after 483.918017ms: waiting for machine to come up
	I0919 20:16:56.839068   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:16:56.839477   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:16:56.839508   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:16:56.839422   52902 retry.go:31] will retry after 390.251622ms: waiting for machine to come up
	I0919 20:16:57.230884   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:16:57.231248   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:16:57.231271   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:16:57.231202   52902 retry.go:31] will retry after 628.993299ms: waiting for machine to come up
	I0919 20:16:57.861940   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:16:57.862325   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:16:57.862352   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:16:57.862275   52902 retry.go:31] will retry after 927.71868ms: waiting for machine to come up
	I0919 20:16:58.791440   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:16:58.791780   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:16:58.791825   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:16:58.791730   52902 retry.go:31] will retry after 867.672087ms: waiting for machine to come up
	I0919 20:16:59.661136   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:16:59.661523   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:16:59.661555   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:16:59.661466   52902 retry.go:31] will retry after 1.068848538s: waiting for machine to come up
	I0919 20:17:00.731444   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:00.731783   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:17:00.731839   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:17:00.731763   52902 retry.go:31] will retry after 1.842099803s: waiting for machine to come up
	I0919 20:17:02.575257   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:02.575587   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:17:02.575608   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:17:02.575538   52902 retry.go:31] will retry after 1.861136645s: waiting for machine to come up
	I0919 20:17:04.439727   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:04.440174   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:17:04.440205   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:17:04.440101   52902 retry.go:31] will retry after 2.863523041s: waiting for machine to come up
	I0919 20:17:07.304797   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:07.305225   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:17:07.305257   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:17:07.305198   52902 retry.go:31] will retry after 2.687578772s: waiting for machine to come up
	I0919 20:17:09.994422   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:09.994745   52819 main.go:141] libmachine: (test-preload-937590) DBG | unable to find current IP address of domain test-preload-937590 in network mk-test-preload-937590
	I0919 20:17:09.994769   52819 main.go:141] libmachine: (test-preload-937590) DBG | I0919 20:17:09.994717   52902 retry.go:31] will retry after 3.351740466s: waiting for machine to come up
	I0919 20:17:13.350362   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.350888   52819 main.go:141] libmachine: (test-preload-937590) Found IP for machine: 192.168.39.152
	I0919 20:17:13.350912   52819 main.go:141] libmachine: (test-preload-937590) Reserving static IP address...
	I0919 20:17:13.350928   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has current primary IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.351391   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "test-preload-937590", mac: "52:54:00:48:18:88", ip: "192.168.39.152"} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:13.351439   52819 main.go:141] libmachine: (test-preload-937590) DBG | skip adding static IP to network mk-test-preload-937590 - found existing host DHCP lease matching {name: "test-preload-937590", mac: "52:54:00:48:18:88", ip: "192.168.39.152"}
	I0919 20:17:13.351452   52819 main.go:141] libmachine: (test-preload-937590) Reserved static IP address: 192.168.39.152
	I0919 20:17:13.351466   52819 main.go:141] libmachine: (test-preload-937590) DBG | Getting to WaitForSSH function...
	I0919 20:17:13.351475   52819 main.go:141] libmachine: (test-preload-937590) Waiting for SSH to be available...
	I0919 20:17:13.353122   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.353498   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:13.353525   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.353637   52819 main.go:141] libmachine: (test-preload-937590) DBG | Using SSH client type: external
	I0919 20:17:13.353655   52819 main.go:141] libmachine: (test-preload-937590) DBG | Using SSH private key: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/test-preload-937590/id_rsa (-rw-------)
	I0919 20:17:13.353693   52819 main.go:141] libmachine: (test-preload-937590) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19664-7917/.minikube/machines/test-preload-937590/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 20:17:13.353703   52819 main.go:141] libmachine: (test-preload-937590) DBG | About to run SSH command:
	I0919 20:17:13.353723   52819 main.go:141] libmachine: (test-preload-937590) DBG | exit 0
	I0919 20:17:13.477083   52819 main.go:141] libmachine: (test-preload-937590) DBG | SSH cmd err, output: <nil>: 
	I0919 20:17:13.477457   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetConfigRaw
	I0919 20:17:13.478020   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetIP
	I0919 20:17:13.480366   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.480698   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:13.480726   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.480901   52819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/config.json ...
	I0919 20:17:13.481113   52819 machine.go:93] provisionDockerMachine start ...
	I0919 20:17:13.481131   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:17:13.481324   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:13.483211   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.483463   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:13.483484   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.483563   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:13.483728   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:13.483870   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:13.483978   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:13.484120   52819 main.go:141] libmachine: Using SSH client type: native
	I0919 20:17:13.484298   52819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0919 20:17:13.484310   52819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 20:17:13.589481   52819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0919 20:17:13.589507   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetMachineName
	I0919 20:17:13.589733   52819 buildroot.go:166] provisioning hostname "test-preload-937590"
	I0919 20:17:13.589760   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetMachineName
	I0919 20:17:13.589977   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:13.592491   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.592769   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:13.592795   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.592941   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:13.593112   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:13.593261   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:13.593379   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:13.593648   52819 main.go:141] libmachine: Using SSH client type: native
	I0919 20:17:13.593810   52819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0919 20:17:13.593822   52819 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-937590 && echo "test-preload-937590" | sudo tee /etc/hostname
	I0919 20:17:13.717430   52819 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-937590
	
	I0919 20:17:13.717462   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:13.720036   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.720370   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:13.720403   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.720586   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:13.720736   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:13.720891   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:13.721003   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:13.721207   52819 main.go:141] libmachine: Using SSH client type: native
	I0919 20:17:13.721396   52819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0919 20:17:13.721415   52819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-937590' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-937590/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-937590' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 20:17:13.834797   52819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 20:17:13.834824   52819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 20:17:13.834847   52819 buildroot.go:174] setting up certificates
	I0919 20:17:13.834856   52819 provision.go:84] configureAuth start
	I0919 20:17:13.834865   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetMachineName
	I0919 20:17:13.835192   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetIP
	I0919 20:17:13.837778   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.838271   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:13.838300   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.838457   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:13.840711   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.841034   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:13.841059   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.841211   52819 provision.go:143] copyHostCerts
	I0919 20:17:13.841297   52819 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 20:17:13.841310   52819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 20:17:13.841398   52819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 20:17:13.841508   52819 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 20:17:13.841520   52819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 20:17:13.841557   52819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 20:17:13.841632   52819 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 20:17:13.841643   52819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 20:17:13.841675   52819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 20:17:13.841754   52819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.test-preload-937590 san=[127.0.0.1 192.168.39.152 localhost minikube test-preload-937590]
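
The server certificate above is issued by minikube's local CA with both IP and DNS SANs (127.0.0.1, 192.168.39.152, localhost, minikube, test-preload-937590). As a rough illustration only, not minikube's actual provisioning code, a minimal Go sketch of SAN-bearing server-cert issuance with the standard library could look like this; the throwaway in-process CA stands in for the ca.pem/ca-key.pem pair referenced in the log:

// Hypothetical sketch: issue a server certificate with the SANs listed above.
// The in-process throwaway CA stands in for minikube's ca.pem/ca-key.pem;
// error handling is abbreviated.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube reuses the one under .minikube/certs instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key and certificate carrying the IP and DNS SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-937590"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "test-preload-937590"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.152")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
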
	I0919 20:17:13.989453   52819 provision.go:177] copyRemoteCerts
	I0919 20:17:13.989512   52819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 20:17:13.989537   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:13.992298   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.992593   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:13.992627   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:13.992777   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:13.992920   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:13.993083   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:13.993200   52819 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/test-preload-937590/id_rsa Username:docker}
	I0919 20:17:14.075576   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 20:17:14.099045   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0919 20:17:14.123093   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 20:17:14.145984   52819 provision.go:87] duration metric: took 311.113378ms to configureAuth
	I0919 20:17:14.146011   52819 buildroot.go:189] setting minikube options for container-runtime
	I0919 20:17:14.146180   52819 config.go:182] Loaded profile config "test-preload-937590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0919 20:17:14.146250   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:14.148998   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.149404   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:14.149439   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.149637   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:14.149826   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:14.149966   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:14.150085   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:14.150230   52819 main.go:141] libmachine: Using SSH client type: native
	I0919 20:17:14.150429   52819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0919 20:17:14.150450   52819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 20:17:14.371113   52819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 20:17:14.371142   52819 machine.go:96] duration metric: took 890.016085ms to provisionDockerMachine
	I0919 20:17:14.371161   52819 start.go:293] postStartSetup for "test-preload-937590" (driver="kvm2")
	I0919 20:17:14.371174   52819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 20:17:14.371190   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:17:14.371508   52819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 20:17:14.371542   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:14.374067   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.374416   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:14.374437   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.374585   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:14.374731   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:14.374866   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:14.374978   52819 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/test-preload-937590/id_rsa Username:docker}
	I0919 20:17:14.461564   52819 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 20:17:14.466050   52819 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 20:17:14.466073   52819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 20:17:14.466129   52819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 20:17:14.466219   52819 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 20:17:14.466308   52819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 20:17:14.477568   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 20:17:14.503056   52819 start.go:296] duration metric: took 131.880125ms for postStartSetup
	I0919 20:17:14.503095   52819 fix.go:56] duration metric: took 19.924531245s for fixHost
	I0919 20:17:14.503116   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:14.505918   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.506405   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:14.506434   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.506584   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:14.506779   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:14.506947   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:14.507080   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:14.507262   52819 main.go:141] libmachine: Using SSH client type: native
	I0919 20:17:14.507424   52819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0919 20:17:14.507433   52819 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 20:17:14.614226   52819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726777034.590865239
	
	I0919 20:17:14.614252   52819 fix.go:216] guest clock: 1726777034.590865239
	I0919 20:17:14.614263   52819 fix.go:229] Guest: 2024-09-19 20:17:14.590865239 +0000 UTC Remote: 2024-09-19 20:17:14.503099288 +0000 UTC m=+34.406816885 (delta=87.765951ms)
	I0919 20:17:14.614285   52819 fix.go:200] guest clock delta is within tolerance: 87.765951ms
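
The fix.go lines above read the guest's clock with `date +%s.%N`, parse it, and accept the drift when it is within tolerance (here a delta of 87.765951ms). A minimal Go sketch of that parse-and-compare step; the tolerance value is assumed purely for illustration, the log only shows that an 87ms delta was accepted:

// Hypothetical sketch of the guest-clock tolerance check: parse the guest's
// `date +%s.%N` output and compare it against the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock assumes the 9-digit nanosecond fraction that %N prints.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726777034.590865239") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed for illustration
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}
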
	I0919 20:17:14.614290   52819 start.go:83] releasing machines lock for "test-preload-937590", held for 20.035736387s
	I0919 20:17:14.614310   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:17:14.614632   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetIP
	I0919 20:17:14.617307   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.617615   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:14.617644   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.617781   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:17:14.618210   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:17:14.618385   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:17:14.618483   52819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 20:17:14.618520   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:14.618569   52819 ssh_runner.go:195] Run: cat /version.json
	I0919 20:17:14.618605   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:14.621271   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.621295   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.621572   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:14.621591   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.621612   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:14.621629   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:14.621799   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:14.621852   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:14.621944   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:14.622010   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:14.622100   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:14.622164   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:14.622254   52819 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/test-preload-937590/id_rsa Username:docker}
	I0919 20:17:14.622292   52819 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/test-preload-937590/id_rsa Username:docker}
	I0919 20:17:14.720410   52819 ssh_runner.go:195] Run: systemctl --version
	I0919 20:17:14.726600   52819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 20:17:14.866766   52819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 20:17:14.872612   52819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 20:17:14.872674   52819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 20:17:14.888823   52819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
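
The find/mv command above renames any bridge or podman CNI config in /etc/cni/net.d to *.mk_disabled so that only minikube's own bridge CNI stays active. A hypothetical Go equivalent of that renaming pass:

// Sketch of the CNI-config disabling step: rename bridge/podman configs in
// /etc/cni/net.d by appending ".mk_disabled", skipping ones already disabled.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			from := filepath.Join(dir, name)
			if err := os.Rename(from, from+".mk_disabled"); err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Println("disabled", from)
		}
	}
}
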
	I0919 20:17:14.888843   52819 start.go:495] detecting cgroup driver to use...
	I0919 20:17:14.888912   52819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 20:17:14.904886   52819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 20:17:14.918883   52819 docker.go:217] disabling cri-docker service (if available) ...
	I0919 20:17:14.918946   52819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 20:17:14.933251   52819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 20:17:14.947670   52819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 20:17:15.065875   52819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 20:17:15.213800   52819 docker.go:233] disabling docker service ...
	I0919 20:17:15.213864   52819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 20:17:15.228194   52819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 20:17:15.241178   52819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 20:17:15.357206   52819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 20:17:15.469540   52819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 20:17:15.490906   52819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 20:17:15.509215   52819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0919 20:17:15.509286   52819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:17:15.519872   52819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 20:17:15.519937   52819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:17:15.530945   52819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:17:15.542815   52819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:17:15.553781   52819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 20:17:15.564797   52819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:17:15.575568   52819 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:17:15.592484   52819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
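
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.7, switch cgroup_manager to cgroupfs, set conmon_cgroup to "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A simplified Go sketch of the same substitutions over an in-memory config; the sample input is illustrative, not the file's real contents:

// Simplified sketch of the CRI-O drop-in rewrites done above via sed.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative starting contents; the real file lives on the VM at
	// /etc/crio/crio.conf.d/02-crio.conf.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image used with Kubernetes v1.24.4.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
	// Use the cgroupfs driver and run conmon in the pod cgroup (the log does
	// this as a delete-then-append pair of sed commands).
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	// The default_sysctls seeding is omitted here for brevity.
	fmt.Print(conf)
}
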
	I0919 20:17:15.603158   52819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 20:17:15.612809   52819 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 20:17:15.612876   52819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 20:17:15.626392   52819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 20:17:15.636044   52819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:17:15.750978   52819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 20:17:15.840890   52819 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 20:17:15.840953   52819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 20:17:15.845692   52819 start.go:563] Will wait 60s for crictl version
	I0919 20:17:15.845738   52819 ssh_runner.go:195] Run: which crictl
	I0919 20:17:15.849707   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 20:17:15.889995   52819 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 20:17:15.890103   52819 ssh_runner.go:195] Run: crio --version
	I0919 20:17:15.922139   52819 ssh_runner.go:195] Run: crio --version
	I0919 20:17:15.951933   52819 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0919 20:17:15.953295   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetIP
	I0919 20:17:15.956004   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:15.956322   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:15.956340   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:15.956512   52819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 20:17:15.960716   52819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
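
The bash one-liner above updates /etc/hosts idempotently: strip any existing host.minikube.internal line, then append the 192.168.39.1 mapping. A small Go sketch of that upsert, assuming tab-separated entries as in the log:

// Sketch of the idempotent /etc/hosts update: drop any stale entry for the
// name, then append the new "ip<TAB>name" mapping.
package main

import (
	"fmt"
	"strings"
)

func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	out := strings.Join(kept, "\n")
	if !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	return out + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	fmt.Print(upsertHostsEntry("127.0.0.1\tlocalhost\n", "192.168.39.1", "host.minikube.internal"))
}
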
	I0919 20:17:15.973890   52819 kubeadm.go:883] updating cluster {Name:test-preload-937590 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-937590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 20:17:15.974027   52819 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0919 20:17:15.974075   52819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:17:16.010483   52819 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0919 20:17:16.010555   52819 ssh_runner.go:195] Run: which lz4
	I0919 20:17:16.014839   52819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 20:17:16.019093   52819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 20:17:16.019146   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0919 20:17:17.582211   52819 crio.go:462] duration metric: took 1.567402993s to copy over tarball
	I0919 20:17:17.582288   52819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 20:17:19.951808   52819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.369494159s)
	I0919 20:17:19.951836   52819 crio.go:469] duration metric: took 2.369600537s to extract the tarball
	I0919 20:17:19.951845   52819 ssh_runner.go:146] rm: /preloaded.tar.lz4
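
The sequence above checks whether /preloaded.tar.lz4 already exists on the VM with `stat -c "%s %y"`, copies the 459 MB preload tarball when it does not, extracts it into /var with tar over lz4, and removes it. A guess at the copy-or-skip decision (comparing size and mtime, as the stat format suggests), sketched in Go:

// Hypothetical sketch of the "copy only when missing or different" check.
package main

import (
	"fmt"
	"os"
)

// needsTransfer guesses whether the tarball must be copied to the VM, given
// the remote `stat -c "%s %y"` output ("" meaning the file is absent there).
func needsTransfer(localPath, remoteStat string) (bool, error) {
	info, err := os.Stat(localPath)
	if err != nil {
		return false, err
	}
	if remoteStat == "" {
		return true, nil // nothing on the VM yet, so copy
	}
	// Size plus modification time, mirroring GNU stat's "%s %y" output.
	localStat := fmt.Sprintf("%d %s", info.Size(),
		info.ModTime().Format("2006-01-02 15:04:05.000000000 -0700"))
	return localStat != remoteStat, nil
}

func main() {
	f, err := os.CreateTemp("", "preloaded-*.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	f.Close()
	need, err := needsTransfer(f.Name(), "")
	fmt.Println(need, err) // true <nil>: remote copy missing, transfer it
}
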
	I0919 20:17:19.992965   52819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:17:20.038914   52819 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0919 20:17:20.038939   52819 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 20:17:20.038999   52819 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 20:17:20.039050   52819 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0919 20:17:20.039074   52819 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0919 20:17:20.039083   52819 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0919 20:17:20.039050   52819 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0919 20:17:20.039131   52819 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0919 20:17:20.039141   52819 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 20:17:20.039151   52819 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 20:17:20.040550   52819 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0919 20:17:20.040582   52819 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0919 20:17:20.040583   52819 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 20:17:20.040551   52819 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0919 20:17:20.040555   52819 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0919 20:17:20.040615   52819 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0919 20:17:20.040555   52819 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 20:17:20.040622   52819 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 20:17:20.195576   52819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0919 20:17:20.216427   52819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0919 20:17:20.222962   52819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 20:17:20.232982   52819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0919 20:17:20.237972   52819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0919 20:17:20.244533   52819 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0919 20:17:20.244579   52819 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0919 20:17:20.244621   52819 ssh_runner.go:195] Run: which crictl
	I0919 20:17:20.280092   52819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0919 20:17:20.292327   52819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0919 20:17:20.293654   52819 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0919 20:17:20.293685   52819 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0919 20:17:20.293732   52819 ssh_runner.go:195] Run: which crictl
	I0919 20:17:20.333795   52819 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0919 20:17:20.333832   52819 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 20:17:20.333879   52819 ssh_runner.go:195] Run: which crictl
	I0919 20:17:20.367287   52819 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0919 20:17:20.367329   52819 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 20:17:20.367354   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0919 20:17:20.367369   52819 ssh_runner.go:195] Run: which crictl
	I0919 20:17:20.367419   52819 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0919 20:17:20.367444   52819 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0919 20:17:20.367480   52819 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0919 20:17:20.367518   52819 ssh_runner.go:195] Run: which crictl
	I0919 20:17:20.367449   52819 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0919 20:17:20.367594   52819 ssh_runner.go:195] Run: which crictl
	I0919 20:17:20.397007   52819 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0919 20:17:20.397041   52819 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0919 20:17:20.397091   52819 ssh_runner.go:195] Run: which crictl
	I0919 20:17:20.397100   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0919 20:17:20.397158   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 20:17:20.423947   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0919 20:17:20.423947   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0919 20:17:20.424061   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0919 20:17:20.424077   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0919 20:17:20.424109   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0919 20:17:20.516817   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 20:17:20.516922   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0919 20:17:20.572415   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0919 20:17:20.572454   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0919 20:17:20.572509   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0919 20:17:20.572588   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0919 20:17:20.572619   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0919 20:17:20.592759   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 20:17:20.658485   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0919 20:17:20.749123   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0919 20:17:20.749123   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0919 20:17:20.749205   52819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0919 20:17:20.749271   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0919 20:17:20.749304   52819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0919 20:17:20.749324   52819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0919 20:17:20.749403   52819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0919 20:17:20.749497   52819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0919 20:17:20.761059   52819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0919 20:17:20.761186   52819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0919 20:17:20.842200   52819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0919 20:17:20.842304   52819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0919 20:17:20.851065   52819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0919 20:17:20.851095   52819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0919 20:17:20.851153   52819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0919 20:17:20.851178   52819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0919 20:17:20.851178   52819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0919 20:17:20.851200   52819 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0919 20:17:20.851218   52819 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0919 20:17:20.851221   52819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0919 20:17:20.851246   52819 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0919 20:17:20.851254   52819 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0919 20:17:20.851291   52819 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0919 20:17:20.851313   52819 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0919 20:17:21.249247   52819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 20:17:23.502030   52819 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.650785346s)
	I0919 20:17:23.502061   52819 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.650791271s)
	I0919 20:17:23.502076   52819 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0919 20:17:23.502077   52819 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0919 20:17:23.502085   52819 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0919 20:17:23.502134   52819 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0919 20:17:23.502135   52819 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.650938515s)
	I0919 20:17:23.502162   52819 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0919 20:17:23.502202   52819 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.651004364s)
	I0919 20:17:23.502226   52819 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0919 20:17:23.502228   52819 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.252961661s)
	I0919 20:17:23.644286   52819 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0919 20:17:23.644319   52819 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0919 20:17:23.644372   52819 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0919 20:17:24.391464   52819 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0919 20:17:24.391490   52819 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0919 20:17:24.391545   52819 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0919 20:17:25.137840   52819 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0919 20:17:25.137873   52819 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0919 20:17:25.137923   52819 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0919 20:17:27.082914   52819 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (1.944971244s)
	I0919 20:17:27.082940   52819 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0919 20:17:27.082947   52819 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0919 20:17:27.082990   52819 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0919 20:17:27.923037   52819 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0919 20:17:27.923068   52819 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0919 20:17:27.923149   52819 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0919 20:17:28.362163   52819 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19664-7917/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0919 20:17:28.362220   52819 cache_images.go:123] Successfully loaded all cached images
	I0919 20:17:28.362228   52819 cache_images.go:92] duration metric: took 8.32327739s to LoadCachedImages
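
Each image loaded above comes from a per-architecture tarball cache; for example registry.k8s.io/coredns/coredns:v1.8.6 maps to .minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 locally and to /var/lib/minikube/images/coredns_v1.8.6 on the VM before `podman load`. A small Go sketch of that name mapping, inferred from the paths in the log:

// Sketch of the image-reference to cache/VM tarball path mapping seen above.
package main

import (
	"fmt"
	"path"
	"strings"
)

func cachePaths(image, arch string) (hostCache, vmPath string) {
	name := strings.ReplaceAll(image, ":", "_") // tag separator becomes "_"
	hostCache = path.Join(".minikube/cache/images", arch, name)
	vmPath = path.Join("/var/lib/minikube/images", path.Base(name))
	return
}

func main() {
	h, v := cachePaths("registry.k8s.io/coredns/coredns:v1.8.6", "amd64")
	fmt.Println(h) // .minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	fmt.Println(v) // /var/lib/minikube/images/coredns_v1.8.6
}
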
	I0919 20:17:28.362242   52819 kubeadm.go:934] updating node { 192.168.39.152 8443 v1.24.4 crio true true} ...
	I0919 20:17:28.362398   52819 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-937590 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-937590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
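
The kubelet drop-in printed above is generated from the cluster config and written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 379-byte scp a few lines below). A hypothetical text/template rendering of that unit; the field names are illustrative, not minikube's actual struct:

// Hypothetical rendering of the kubelet systemd drop-in shown above.
package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.24.4",
		"NodeName":          "test-preload-937590",
		"NodeIP":            "192.168.39.152",
	})
}
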
	I0919 20:17:28.362475   52819 ssh_runner.go:195] Run: crio config
	I0919 20:17:28.410834   52819 cni.go:84] Creating CNI manager for ""
	I0919 20:17:28.410856   52819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 20:17:28.410865   52819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 20:17:28.410881   52819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-937590 NodeName:test-preload-937590 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 20:17:28.411037   52819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-937590"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 20:17:28.411121   52819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0919 20:17:28.420955   52819 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 20:17:28.421016   52819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 20:17:28.430447   52819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0919 20:17:28.446745   52819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 20:17:28.462897   52819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0919 20:17:28.479545   52819 ssh_runner.go:195] Run: grep 192.168.39.152	control-plane.minikube.internal$ /etc/hosts
	I0919 20:17:28.483304   52819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 20:17:28.495738   52819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:17:28.615871   52819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 20:17:28.633103   52819 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590 for IP: 192.168.39.152
	I0919 20:17:28.633130   52819 certs.go:194] generating shared ca certs ...
	I0919 20:17:28.633155   52819 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 20:17:28.633324   52819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 20:17:28.633381   52819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 20:17:28.633394   52819 certs.go:256] generating profile certs ...
	I0919 20:17:28.633520   52819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/client.key
	I0919 20:17:28.633599   52819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/apiserver.key.618b1c47
	I0919 20:17:28.633654   52819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/proxy-client.key
	I0919 20:17:28.633810   52819 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 20:17:28.633850   52819 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 20:17:28.633862   52819 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 20:17:28.633898   52819 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 20:17:28.633921   52819 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 20:17:28.633945   52819 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 20:17:28.633997   52819 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 20:17:28.634922   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 20:17:28.680512   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 20:17:28.720590   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 20:17:28.749399   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 20:17:28.776072   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0919 20:17:28.802157   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 20:17:28.843696   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 20:17:28.870780   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 20:17:28.894483   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 20:17:28.917916   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 20:17:28.941415   52819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 20:17:28.964940   52819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 20:17:28.981403   52819 ssh_runner.go:195] Run: openssl version
	I0919 20:17:28.987302   52819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 20:17:28.997914   52819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 20:17:29.002467   52819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 20:17:29.002525   52819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 20:17:29.008531   52819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 20:17:29.018894   52819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 20:17:29.029563   52819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 20:17:29.033956   52819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 20:17:29.033997   52819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 20:17:29.039696   52819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 20:17:29.049879   52819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 20:17:29.060179   52819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:17:29.064651   52819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:17:29.064712   52819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:17:29.070343   52819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
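
Each CA certificate placed under /usr/share/ca-certificates above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (for example minikubeCA.pem -> b5213941.0). A small Go sketch that derives that link name by shelling out to the same `openssl x509 -hash -noout` call seen in the log:

// Sketch: compute the OpenSSL subject hash of a CA certificate and derive the
// /etc/ssl/certs/<hash>.0 symlink name used above.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func subjectHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	return filepath.Join("/etc/ssl/certs", hash+".0"), nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err)
}
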
	I0919 20:17:29.080686   52819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 20:17:29.084860   52819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 20:17:29.090630   52819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 20:17:29.096526   52819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 20:17:29.102377   52819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 20:17:29.107979   52819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 20:17:29.113937   52819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
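
The `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent check sketched in Go with crypto/x509 instead of the openssl CLI:

// Sketch of the "-checkend 86400" expiry check using crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
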
	I0919 20:17:29.119590   52819 kubeadm.go:392] StartCluster: {Name:test-preload-937590 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-937590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:17:29.119697   52819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 20:17:29.119745   52819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 20:17:29.165428   52819 cri.go:89] found id: ""
	I0919 20:17:29.165503   52819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 20:17:29.175866   52819 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 20:17:29.175889   52819 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0919 20:17:29.175937   52819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 20:17:29.185406   52819 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 20:17:29.185853   52819 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-937590" does not appear in /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 20:17:29.185993   52819 kubeconfig.go:62] /home/jenkins/minikube-integration/19664-7917/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-937590" cluster setting kubeconfig missing "test-preload-937590" context setting]
	I0919 20:17:29.186269   52819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/kubeconfig: {Name:mk632e082e805bb0ee3f336087f78588814f24af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 20:17:29.186874   52819 kapi.go:59] client config for test-preload-937590: &rest.Config{Host:"https://192.168.39.152:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 20:17:29.187593   52819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 20:17:29.196905   52819 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.152
	I0919 20:17:29.196935   52819 kubeadm.go:1160] stopping kube-system containers ...
	I0919 20:17:29.196955   52819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0919 20:17:29.197007   52819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 20:17:29.231475   52819 cri.go:89] found id: ""
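Both crictl listings above return an empty ID set, meaning CRI-O is not yet running any kube-system containers that would need stopping. A hypothetical Go sketch of issuing the same label-filtered query (the crictl flags are taken verbatim from the log; the helper is illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same query as the log: all containers in
// any state whose pod namespace label is kube-system, printing only their IDs.
// An empty slice, as seen above, means there is nothing to stop.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println("kube-system container IDs:", ids, "err:", err)
}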
	I0919 20:17:29.231545   52819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 20:17:29.247566   52819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 20:17:29.257321   52819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 20:17:29.257338   52819 kubeadm.go:157] found existing configuration files:
	
	I0919 20:17:29.257375   52819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 20:17:29.266201   52819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 20:17:29.266250   52819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 20:17:29.275358   52819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 20:17:29.284039   52819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 20:17:29.284082   52819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 20:17:29.293134   52819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 20:17:29.301672   52819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 20:17:29.301734   52819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 20:17:29.310422   52819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 20:17:29.318555   52819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 20:17:29.318599   52819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 20:17:29.327283   52819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 20:17:29.336498   52819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 20:17:29.438980   52819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 20:17:29.972137   52819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 20:17:30.229036   52819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 20:17:30.301112   52819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
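Rather than a full "kubeadm init", the restart path replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml, with PATH pointing at the versioned binaries directory. A hypothetical sketch of driving the same phase commands shown above (the shell invocations are copied from the log; the wrapper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhase mirrors the log's ssh_runner invocations: each kubeadm init
// phase is run through bash with PATH prefixed by the versioned binaries
// directory, against the generated kubeadm.yaml.
func runInitPhase(phase string) error {
	script := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
		phase)
	cmd := exec.Command("/bin/bash", "-c", script)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, phase := range []string{
		"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
	} {
		if err := runInitPhase(phase); err != nil {
			fmt.Println("phase failed:", phase, err)
			return
		}
	}
}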
	I0919 20:17:30.426696   52819 api_server.go:52] waiting for apiserver process to appear ...
	I0919 20:17:30.426777   52819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 20:17:30.927764   52819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 20:17:31.427752   52819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 20:17:31.446546   52819 api_server.go:72] duration metric: took 1.019839808s to wait for apiserver process to appear ...
	I0919 20:17:31.446569   52819 api_server.go:88] waiting for apiserver healthz status ...
	I0919 20:17:31.446586   52819 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0919 20:17:31.447027   52819 api_server.go:269] stopped: https://192.168.39.152:8443/healthz: Get "https://192.168.39.152:8443/healthz": dial tcp 192.168.39.152:8443: connect: connection refused
	I0919 20:17:31.947711   52819 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0919 20:17:35.379220   52819 api_server.go:279] https://192.168.39.152:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 20:17:35.379250   52819 api_server.go:103] status: https://192.168.39.152:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 20:17:35.379267   52819 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0919 20:17:35.394943   52819 api_server.go:279] https://192.168.39.152:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 20:17:35.394969   52819 api_server.go:103] status: https://192.168.39.152:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 20:17:35.447244   52819 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0919 20:17:35.454215   52819 api_server.go:279] https://192.168.39.152:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0919 20:17:35.454239   52819 api_server.go:103] status: https://192.168.39.152:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0919 20:17:35.946832   52819 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0919 20:17:35.952050   52819 api_server.go:279] https://192.168.39.152:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 20:17:35.952076   52819 api_server.go:103] status: https://192.168.39.152:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 20:17:36.446737   52819 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0919 20:17:36.452554   52819 api_server.go:279] https://192.168.39.152:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 20:17:36.452591   52819 api_server.go:103] status: https://192.168.39.152:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 20:17:36.947121   52819 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0919 20:17:36.957803   52819 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0919 20:17:36.964738   52819 api_server.go:141] control plane version: v1.24.4
	I0919 20:17:36.964765   52819 api_server.go:131] duration metric: took 5.518190206s to wait for apiserver health ...
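The healthz wait above tolerates the intermediate responses: 403 while anonymous requests are rejected before RBAC bootstrap, then 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still running, and it succeeds once the endpoint returns 200 "ok". A minimal, hypothetical Go sketch of such a poll against the endpoint from the log (it skips TLS verification, unlike the real client, and is not minikube's code):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 or the deadline passes; 403 and 500 are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// This sketch skips certificate verification; the real check
		// authenticates with the cluster CA and client certificates.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.152:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}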
	I0919 20:17:36.964773   52819 cni.go:84] Creating CNI manager for ""
	I0919 20:17:36.964779   52819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 20:17:36.966769   52819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 20:17:36.968491   52819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 20:17:36.979041   52819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0919 20:17:36.996904   52819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 20:17:36.996988   52819 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 20:17:36.997005   52819 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 20:17:37.006679   52819 system_pods.go:59] 8 kube-system pods found
	I0919 20:17:37.006709   52819 system_pods.go:61] "coredns-6d4b75cb6d-fcgqg" [dfe227fd-c850-4a06-80be-ce316e87c0fa] Running
	I0919 20:17:37.006718   52819 system_pods.go:61] "coredns-6d4b75cb6d-zb6x7" [d1f6fb7e-ffa3-4281-bd9f-32642f984a02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 20:17:37.006728   52819 system_pods.go:61] "etcd-test-preload-937590" [c634d80c-8804-4f19-846c-c92fc1467083] Running
	I0919 20:17:37.006734   52819 system_pods.go:61] "kube-apiserver-test-preload-937590" [01e741ba-207c-4fec-93e6-b414f4feb1c9] Running
	I0919 20:17:37.006741   52819 system_pods.go:61] "kube-controller-manager-test-preload-937590" [caaa27eb-a604-467a-9bf2-e1bc56f4da96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 20:17:37.006747   52819 system_pods.go:61] "kube-proxy-l9j5z" [9aedab61-52e6-4dd6-b1f5-abb9deee6d24] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 20:17:37.006753   52819 system_pods.go:61] "kube-scheduler-test-preload-937590" [240ec5c9-aa69-4664-80e4-0b87938d1ba0] Running
	I0919 20:17:37.006760   52819 system_pods.go:61] "storage-provisioner" [fff94643-ff74-4fe8-a9d5-b339bd0abe07] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 20:17:37.006769   52819 system_pods.go:74] duration metric: took 9.842351ms to wait for pod list to return data ...
	I0919 20:17:37.006780   52819 node_conditions.go:102] verifying NodePressure condition ...
	I0919 20:17:37.010303   52819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 20:17:37.010325   52819 node_conditions.go:123] node cpu capacity is 2
	I0919 20:17:37.010334   52819 node_conditions.go:105] duration metric: took 3.549394ms to run NodePressure ...
	I0919 20:17:37.010351   52819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 20:17:37.197621   52819 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0919 20:17:37.202830   52819 kubeadm.go:739] kubelet initialised
	I0919 20:17:37.202850   52819 kubeadm.go:740] duration metric: took 5.206329ms waiting for restarted kubelet to initialise ...
	I0919 20:17:37.202857   52819 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 20:17:37.208258   52819 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-fcgqg" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:37.212501   52819 pod_ready.go:98] node "test-preload-937590" hosting pod "coredns-6d4b75cb6d-fcgqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:37.212522   52819 pod_ready.go:82] duration metric: took 4.241587ms for pod "coredns-6d4b75cb6d-fcgqg" in "kube-system" namespace to be "Ready" ...
	E0919 20:17:37.212530   52819 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-937590" hosting pod "coredns-6d4b75cb6d-fcgqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:37.212537   52819 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-zb6x7" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:37.216508   52819 pod_ready.go:98] node "test-preload-937590" hosting pod "coredns-6d4b75cb6d-zb6x7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:37.216524   52819 pod_ready.go:82] duration metric: took 3.977612ms for pod "coredns-6d4b75cb6d-zb6x7" in "kube-system" namespace to be "Ready" ...
	E0919 20:17:37.216531   52819 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-937590" hosting pod "coredns-6d4b75cb6d-zb6x7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:37.216536   52819 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:37.220333   52819 pod_ready.go:98] node "test-preload-937590" hosting pod "etcd-test-preload-937590" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:37.220352   52819 pod_ready.go:82] duration metric: took 3.807952ms for pod "etcd-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	E0919 20:17:37.220360   52819 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-937590" hosting pod "etcd-test-preload-937590" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:37.220366   52819 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:37.400504   52819 pod_ready.go:98] node "test-preload-937590" hosting pod "kube-apiserver-test-preload-937590" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:37.400528   52819 pod_ready.go:82] duration metric: took 180.151692ms for pod "kube-apiserver-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	E0919 20:17:37.400536   52819 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-937590" hosting pod "kube-apiserver-test-preload-937590" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:37.400542   52819 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:37.802693   52819 pod_ready.go:98] node "test-preload-937590" hosting pod "kube-controller-manager-test-preload-937590" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:37.802726   52819 pod_ready.go:82] duration metric: took 402.174398ms for pod "kube-controller-manager-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	E0919 20:17:37.802739   52819 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-937590" hosting pod "kube-controller-manager-test-preload-937590" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:37.802747   52819 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l9j5z" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:38.201778   52819 pod_ready.go:98] node "test-preload-937590" hosting pod "kube-proxy-l9j5z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:38.201802   52819 pod_ready.go:82] duration metric: took 399.045405ms for pod "kube-proxy-l9j5z" in "kube-system" namespace to be "Ready" ...
	E0919 20:17:38.201811   52819 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-937590" hosting pod "kube-proxy-l9j5z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:38.201818   52819 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:38.600130   52819 pod_ready.go:98] node "test-preload-937590" hosting pod "kube-scheduler-test-preload-937590" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:38.600157   52819 pod_ready.go:82] duration metric: took 398.332672ms for pod "kube-scheduler-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	E0919 20:17:38.600167   52819 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-937590" hosting pod "kube-scheduler-test-preload-937590" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:38.600173   52819 pod_ready.go:39] duration metric: took 1.397307908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 20:17:38.600206   52819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 20:17:38.612671   52819 ops.go:34] apiserver oom_adj: -16
	I0919 20:17:38.612695   52819 kubeadm.go:597] duration metric: took 9.43679862s to restartPrimaryControlPlane
	I0919 20:17:38.612706   52819 kubeadm.go:394] duration metric: took 9.493121916s to StartCluster
	I0919 20:17:38.612769   52819 settings.go:142] acquiring lock: {Name:mk58f627f177d13dd5c0d47e681e886cab43cce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 20:17:38.612838   52819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 20:17:38.613466   52819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/kubeconfig: {Name:mk632e082e805bb0ee3f336087f78588814f24af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 20:17:38.613703   52819 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 20:17:38.613775   52819 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 20:17:38.613864   52819 config.go:182] Loaded profile config "test-preload-937590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0919 20:17:38.613885   52819 addons.go:69] Setting storage-provisioner=true in profile "test-preload-937590"
	I0919 20:17:38.613897   52819 addons.go:69] Setting default-storageclass=true in profile "test-preload-937590"
	I0919 20:17:38.613903   52819 addons.go:234] Setting addon storage-provisioner=true in "test-preload-937590"
	W0919 20:17:38.613915   52819 addons.go:243] addon storage-provisioner should already be in state true
	I0919 20:17:38.613918   52819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-937590"
	I0919 20:17:38.613941   52819 host.go:66] Checking if "test-preload-937590" exists ...
	I0919 20:17:38.614264   52819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:17:38.614320   52819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:17:38.614265   52819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:17:38.614383   52819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:17:38.615575   52819 out.go:177] * Verifying Kubernetes components...
	I0919 20:17:38.617267   52819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:17:38.629043   52819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42169
	I0919 20:17:38.629079   52819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I0919 20:17:38.629518   52819 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:17:38.629563   52819 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:17:38.630046   52819 main.go:141] libmachine: Using API Version  1
	I0919 20:17:38.630064   52819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:17:38.630183   52819 main.go:141] libmachine: Using API Version  1
	I0919 20:17:38.630208   52819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:17:38.630326   52819 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:17:38.630542   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetState
	I0919 20:17:38.630545   52819 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:17:38.631124   52819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:17:38.631167   52819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:17:38.632591   52819 kapi.go:59] client config for test-preload-937590: &rest.Config{Host:"https://192.168.39.152:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/profiles/test-preload-937590/client.key", CAFile:"/home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 20:17:38.632803   52819 addons.go:234] Setting addon default-storageclass=true in "test-preload-937590"
	W0919 20:17:38.632817   52819 addons.go:243] addon default-storageclass should already be in state true
	I0919 20:17:38.632837   52819 host.go:66] Checking if "test-preload-937590" exists ...
	I0919 20:17:38.633079   52819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:17:38.633113   52819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:17:38.647099   52819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0919 20:17:38.647582   52819 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:17:38.648102   52819 main.go:141] libmachine: Using API Version  1
	I0919 20:17:38.648122   52819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:17:38.648451   52819 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:17:38.649021   52819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:17:38.649108   52819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:17:38.649604   52819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33561
	I0919 20:17:38.669765   52819 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:17:38.670374   52819 main.go:141] libmachine: Using API Version  1
	I0919 20:17:38.670395   52819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:17:38.670793   52819 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:17:38.671038   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetState
	I0919 20:17:38.673125   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:17:38.675447   52819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 20:17:38.677234   52819 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 20:17:38.677257   52819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 20:17:38.677279   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:38.680668   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:38.681173   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:38.681199   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:38.681397   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:38.681613   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:38.681784   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:38.681923   52819 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/test-preload-937590/id_rsa Username:docker}
	I0919 20:17:38.685783   52819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45129
	I0919 20:17:38.686243   52819 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:17:38.686778   52819 main.go:141] libmachine: Using API Version  1
	I0919 20:17:38.686806   52819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:17:38.687126   52819 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:17:38.687315   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetState
	I0919 20:17:38.688805   52819 main.go:141] libmachine: (test-preload-937590) Calling .DriverName
	I0919 20:17:38.688997   52819 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 20:17:38.689013   52819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 20:17:38.689029   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHHostname
	I0919 20:17:38.691275   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:38.691649   52819 main.go:141] libmachine: (test-preload-937590) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:18:88", ip: ""} in network mk-test-preload-937590: {Iface:virbr1 ExpiryTime:2024-09-19 21:17:05 +0000 UTC Type:0 Mac:52:54:00:48:18:88 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:test-preload-937590 Clientid:01:52:54:00:48:18:88}
	I0919 20:17:38.691686   52819 main.go:141] libmachine: (test-preload-937590) DBG | domain test-preload-937590 has defined IP address 192.168.39.152 and MAC address 52:54:00:48:18:88 in network mk-test-preload-937590
	I0919 20:17:38.691810   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHPort
	I0919 20:17:38.691970   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHKeyPath
	I0919 20:17:38.692127   52819 main.go:141] libmachine: (test-preload-937590) Calling .GetSSHUsername
	I0919 20:17:38.692244   52819 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/test-preload-937590/id_rsa Username:docker}
	I0919 20:17:38.796419   52819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 20:17:38.814316   52819 node_ready.go:35] waiting up to 6m0s for node "test-preload-937590" to be "Ready" ...
	I0919 20:17:38.895194   52819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 20:17:38.904651   52819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 20:17:39.870564   52819 main.go:141] libmachine: Making call to close driver server
	I0919 20:17:39.870592   52819 main.go:141] libmachine: (test-preload-937590) Calling .Close
	I0919 20:17:39.870611   52819 main.go:141] libmachine: Making call to close driver server
	I0919 20:17:39.870630   52819 main.go:141] libmachine: (test-preload-937590) Calling .Close
	I0919 20:17:39.870883   52819 main.go:141] libmachine: Successfully made call to close driver server
	I0919 20:17:39.870899   52819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 20:17:39.870907   52819 main.go:141] libmachine: Making call to close driver server
	I0919 20:17:39.870914   52819 main.go:141] libmachine: (test-preload-937590) Calling .Close
	I0919 20:17:39.871041   52819 main.go:141] libmachine: (test-preload-937590) DBG | Closing plugin on server side
	I0919 20:17:39.871058   52819 main.go:141] libmachine: Successfully made call to close driver server
	I0919 20:17:39.871076   52819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 20:17:39.871090   52819 main.go:141] libmachine: Making call to close driver server
	I0919 20:17:39.871100   52819 main.go:141] libmachine: (test-preload-937590) Calling .Close
	I0919 20:17:39.871103   52819 main.go:141] libmachine: Successfully made call to close driver server
	I0919 20:17:39.871117   52819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 20:17:39.871100   52819 main.go:141] libmachine: (test-preload-937590) DBG | Closing plugin on server side
	I0919 20:17:39.871290   52819 main.go:141] libmachine: Successfully made call to close driver server
	I0919 20:17:39.871311   52819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 20:17:39.871311   52819 main.go:141] libmachine: (test-preload-937590) DBG | Closing plugin on server side
	I0919 20:17:39.882165   52819 main.go:141] libmachine: Making call to close driver server
	I0919 20:17:39.882185   52819 main.go:141] libmachine: (test-preload-937590) Calling .Close
	I0919 20:17:39.882436   52819 main.go:141] libmachine: Successfully made call to close driver server
	I0919 20:17:39.882449   52819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 20:17:39.884627   52819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0919 20:17:39.886141   52819 addons.go:510] duration metric: took 1.272378999s for enable addons: enabled=[storage-provisioner default-storageclass]
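The two addon manifests are applied by running the cluster's bundled kubectl with KUBECONFIG set to the in-VM kubeconfig, as shown a few lines above. A hypothetical local sketch of the same apply step (paths and command line are copied from the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest mirrors the log's ssh_runner commands: it runs the
// cluster-local kubectl binary with KUBECONFIG set and applies one manifest.
func applyManifest(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.4/kubectl",
		"apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyManifest(m); err != nil {
			fmt.Println("apply failed:", m, err)
		}
	}
}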
	I0919 20:17:40.819589   52819 node_ready.go:53] node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:43.318543   52819 node_ready.go:53] node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:45.318745   52819 node_ready.go:53] node "test-preload-937590" has status "Ready":"False"
	I0919 20:17:45.817942   52819 node_ready.go:49] node "test-preload-937590" has status "Ready":"True"
	I0919 20:17:45.817965   52819 node_ready.go:38] duration metric: took 7.003621373s for node "test-preload-937590" to be "Ready" ...
	I0919 20:17:45.817973   52819 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 20:17:45.824742   52819 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-fcgqg" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:45.833363   52819 pod_ready.go:93] pod "coredns-6d4b75cb6d-fcgqg" in "kube-system" namespace has status "Ready":"True"
	I0919 20:17:45.833387   52819 pod_ready.go:82] duration metric: took 8.614497ms for pod "coredns-6d4b75cb6d-fcgqg" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:45.833405   52819 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:47.843023   52819 pod_ready.go:103] pod "etcd-test-preload-937590" in "kube-system" namespace has status "Ready":"False"
	I0919 20:17:50.342143   52819 pod_ready.go:93] pod "etcd-test-preload-937590" in "kube-system" namespace has status "Ready":"True"
	I0919 20:17:50.342170   52819 pod_ready.go:82] duration metric: took 4.508756052s for pod "etcd-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:50.342179   52819 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:50.347115   52819 pod_ready.go:93] pod "kube-apiserver-test-preload-937590" in "kube-system" namespace has status "Ready":"True"
	I0919 20:17:50.347136   52819 pod_ready.go:82] duration metric: took 4.950127ms for pod "kube-apiserver-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:50.347146   52819 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:50.358213   52819 pod_ready.go:93] pod "kube-controller-manager-test-preload-937590" in "kube-system" namespace has status "Ready":"True"
	I0919 20:17:50.358238   52819 pod_ready.go:82] duration metric: took 11.086266ms for pod "kube-controller-manager-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:50.358251   52819 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l9j5z" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:50.364505   52819 pod_ready.go:93] pod "kube-proxy-l9j5z" in "kube-system" namespace has status "Ready":"True"
	I0919 20:17:50.364527   52819 pod_ready.go:82] duration metric: took 6.269561ms for pod "kube-proxy-l9j5z" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:50.364536   52819 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:50.370967   52819 pod_ready.go:93] pod "kube-scheduler-test-preload-937590" in "kube-system" namespace has status "Ready":"True"
	I0919 20:17:50.370990   52819 pod_ready.go:82] duration metric: took 6.448016ms for pod "kube-scheduler-test-preload-937590" in "kube-system" namespace to be "Ready" ...
	I0919 20:17:50.371002   52819 pod_ready.go:39] duration metric: took 4.553019513s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
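Each pod_ready wait above polls the pod until its Ready condition reports True. A hypothetical client-go sketch of that check (the kubeconfig path and pod name are copied from the log; error handling is deliberately minimal):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// condition the pod_ready waits keep polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19664-7917/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-test-preload-937590", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, "ready:", isPodReady(pod))
}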
	I0919 20:17:50.371018   52819 api_server.go:52] waiting for apiserver process to appear ...
	I0919 20:17:50.371076   52819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 20:17:50.385263   52819 api_server.go:72] duration metric: took 11.771529825s to wait for apiserver process to appear ...
	I0919 20:17:50.385290   52819 api_server.go:88] waiting for apiserver healthz status ...
	I0919 20:17:50.385311   52819 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0919 20:17:50.391058   52819 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0919 20:17:50.391921   52819 api_server.go:141] control plane version: v1.24.4
	I0919 20:17:50.391939   52819 api_server.go:131] duration metric: took 6.642064ms to wait for apiserver health ...
	I0919 20:17:50.391948   52819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 20:17:50.540798   52819 system_pods.go:59] 7 kube-system pods found
	I0919 20:17:50.540825   52819 system_pods.go:61] "coredns-6d4b75cb6d-fcgqg" [dfe227fd-c850-4a06-80be-ce316e87c0fa] Running
	I0919 20:17:50.540829   52819 system_pods.go:61] "etcd-test-preload-937590" [c634d80c-8804-4f19-846c-c92fc1467083] Running
	I0919 20:17:50.540833   52819 system_pods.go:61] "kube-apiserver-test-preload-937590" [01e741ba-207c-4fec-93e6-b414f4feb1c9] Running
	I0919 20:17:50.540837   52819 system_pods.go:61] "kube-controller-manager-test-preload-937590" [caaa27eb-a604-467a-9bf2-e1bc56f4da96] Running
	I0919 20:17:50.540840   52819 system_pods.go:61] "kube-proxy-l9j5z" [9aedab61-52e6-4dd6-b1f5-abb9deee6d24] Running
	I0919 20:17:50.540843   52819 system_pods.go:61] "kube-scheduler-test-preload-937590" [240ec5c9-aa69-4664-80e4-0b87938d1ba0] Running
	I0919 20:17:50.540846   52819 system_pods.go:61] "storage-provisioner" [fff94643-ff74-4fe8-a9d5-b339bd0abe07] Running
	I0919 20:17:50.540852   52819 system_pods.go:74] duration metric: took 148.898075ms to wait for pod list to return data ...
	I0919 20:17:50.540858   52819 default_sa.go:34] waiting for default service account to be created ...
	I0919 20:17:50.738030   52819 default_sa.go:45] found service account: "default"
	I0919 20:17:50.738056   52819 default_sa.go:55] duration metric: took 197.192672ms for default service account to be created ...
	I0919 20:17:50.738064   52819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 20:17:50.941551   52819 system_pods.go:86] 7 kube-system pods found
	I0919 20:17:50.941580   52819 system_pods.go:89] "coredns-6d4b75cb6d-fcgqg" [dfe227fd-c850-4a06-80be-ce316e87c0fa] Running
	I0919 20:17:50.941585   52819 system_pods.go:89] "etcd-test-preload-937590" [c634d80c-8804-4f19-846c-c92fc1467083] Running
	I0919 20:17:50.941594   52819 system_pods.go:89] "kube-apiserver-test-preload-937590" [01e741ba-207c-4fec-93e6-b414f4feb1c9] Running
	I0919 20:17:50.941598   52819 system_pods.go:89] "kube-controller-manager-test-preload-937590" [caaa27eb-a604-467a-9bf2-e1bc56f4da96] Running
	I0919 20:17:50.941601   52819 system_pods.go:89] "kube-proxy-l9j5z" [9aedab61-52e6-4dd6-b1f5-abb9deee6d24] Running
	I0919 20:17:50.941604   52819 system_pods.go:89] "kube-scheduler-test-preload-937590" [240ec5c9-aa69-4664-80e4-0b87938d1ba0] Running
	I0919 20:17:50.941607   52819 system_pods.go:89] "storage-provisioner" [fff94643-ff74-4fe8-a9d5-b339bd0abe07] Running
	I0919 20:17:50.941613   52819 system_pods.go:126] duration metric: took 203.543987ms to wait for k8s-apps to be running ...
	I0919 20:17:50.941620   52819 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 20:17:50.941661   52819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 20:17:50.955687   52819 system_svc.go:56] duration metric: took 14.056433ms WaitForService to wait for kubelet
	I0919 20:17:50.955719   52819 kubeadm.go:582] duration metric: took 12.341988082s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 20:17:50.955741   52819 node_conditions.go:102] verifying NodePressure condition ...
	I0919 20:17:51.138734   52819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 20:17:51.138762   52819 node_conditions.go:123] node cpu capacity is 2
	I0919 20:17:51.138773   52819 node_conditions.go:105] duration metric: took 183.026903ms to run NodePressure ...
	I0919 20:17:51.138783   52819 start.go:241] waiting for startup goroutines ...
	I0919 20:17:51.138790   52819 start.go:246] waiting for cluster config update ...
	I0919 20:17:51.138799   52819 start.go:255] writing updated cluster config ...
	I0919 20:17:51.139046   52819 ssh_runner.go:195] Run: rm -f paused
	I0919 20:17:51.185031   52819 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0919 20:17:51.187386   52819 out.go:201] 
	W0919 20:17:51.188817   52819 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0919 20:17:51.190302   52819 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0919 20:17:51.191673   52819 out.go:177] * Done! kubectl is now configured to use "test-preload-937590" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.062631552Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777072062608096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31ea07b5-765e-491f-97d3-34775ed34b31 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.063199878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f4a382f-7880-4b6e-80ff-143a0df8a335 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.063276530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f4a382f-7880-4b6e-80ff-143a0df8a335 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.063441100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:279f52708f2ffafc9e221e0b8ecfe4cf9446ce0613e06541a78dfaaef573b55e,PodSandboxId:abc25864a00d7e51a24f0186bd7aa2903c63864794ee80539af7ba806913f967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726777064395082414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fcgqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe227fd-c850-4a06-80be-ce316e87c0fa,},Annotations:map[string]string{io.kubernetes.container.hash: d1c2a6ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c370480c30dd3e173263da11d95bf0dbfe3adcf97fe306bfca261d11035146e,PodSandboxId:a5f1914d2cf64b6b6fd631ee53e163dafe43b39a3b9db4b6c724b18df560b4a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726777057397908933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9j5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9aedab61-52e6-4dd6-b1f5-abb9deee6d24,},Annotations:map[string]string{io.kubernetes.container.hash: 7bbf4803,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b643e4dd5c2f383be572844a5aa56eb75bf8e6807688da3ef86687c92b540d,PodSandboxId:e2d16c08fb2c03f2dbaf099ab738f4fa437a6a8cb1f3543c3079b8548b383cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726777057395896306,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff
f94643-ff74-4fe8-a9d5-b339bd0abe07,},Annotations:map[string]string{io.kubernetes.container.hash: eea9dedd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8865b19800cf1089ba42d3304c919600623392fcf2109f107e5ccace7a13aacb,PodSandboxId:f41d642e2574b8b3c6e33c7933fd0401b1a4dc6663df220ff8c744ae5e245627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726777051336330694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 51702d96c7e466453ca5ed4c38cfa232,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57c280cad94551e84574a0f672b47a6ed92df6d739bb85ab20df482af676da7,PodSandboxId:ffec9c2c2873c7bc5e1d15416b542db157e2cce8831f7470decf14d9184a6c0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726777051066061582,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: bd2061012b7c78fb5252e5df0bfd7919,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8b315c9c09fd2c838f2ee30ef2b437ca03355de54db36d1c0574a75988f067,PodSandboxId:be042279b1beccd2fdb139df7d59b2a4640ac5cd8ea3d00537ac05f3f5a31b39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726777051069389753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c5
244352f041b322cd21acf9548197,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2444f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc20e8962f6ad517a526b579976a56b7cc6db6622f988c7d31a088071f3041b,PodSandboxId:dad7253d210e2b405d02f200ff9dd3bc76c2f12be0f0c74ed890b11b4fc80a0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726777051017219779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a75d6d97639639ef8a4306e8bc76d2f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 9094759e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f4a382f-7880-4b6e-80ff-143a0df8a335 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.098873105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e961f7b-a305-4f44-a233-bec5b000c86d name=/runtime.v1.RuntimeService/Version
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.098962560Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e961f7b-a305-4f44-a233-bec5b000c86d name=/runtime.v1.RuntimeService/Version
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.099880755Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1164c5a-d4a5-44e5-8f31-b5b5e71f4294 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.100296747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777072100273738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1164c5a-d4a5-44e5-8f31-b5b5e71f4294 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.100824731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eec571c5-4493-4465-ba91-f0aea4fafc75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.100875859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eec571c5-4493-4465-ba91-f0aea4fafc75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.101070171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:279f52708f2ffafc9e221e0b8ecfe4cf9446ce0613e06541a78dfaaef573b55e,PodSandboxId:abc25864a00d7e51a24f0186bd7aa2903c63864794ee80539af7ba806913f967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726777064395082414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fcgqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe227fd-c850-4a06-80be-ce316e87c0fa,},Annotations:map[string]string{io.kubernetes.container.hash: d1c2a6ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c370480c30dd3e173263da11d95bf0dbfe3adcf97fe306bfca261d11035146e,PodSandboxId:a5f1914d2cf64b6b6fd631ee53e163dafe43b39a3b9db4b6c724b18df560b4a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726777057397908933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9j5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9aedab61-52e6-4dd6-b1f5-abb9deee6d24,},Annotations:map[string]string{io.kubernetes.container.hash: 7bbf4803,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b643e4dd5c2f383be572844a5aa56eb75bf8e6807688da3ef86687c92b540d,PodSandboxId:e2d16c08fb2c03f2dbaf099ab738f4fa437a6a8cb1f3543c3079b8548b383cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726777057395896306,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff
f94643-ff74-4fe8-a9d5-b339bd0abe07,},Annotations:map[string]string{io.kubernetes.container.hash: eea9dedd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8865b19800cf1089ba42d3304c919600623392fcf2109f107e5ccace7a13aacb,PodSandboxId:f41d642e2574b8b3c6e33c7933fd0401b1a4dc6663df220ff8c744ae5e245627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726777051336330694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 51702d96c7e466453ca5ed4c38cfa232,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57c280cad94551e84574a0f672b47a6ed92df6d739bb85ab20df482af676da7,PodSandboxId:ffec9c2c2873c7bc5e1d15416b542db157e2cce8831f7470decf14d9184a6c0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726777051066061582,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: bd2061012b7c78fb5252e5df0bfd7919,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8b315c9c09fd2c838f2ee30ef2b437ca03355de54db36d1c0574a75988f067,PodSandboxId:be042279b1beccd2fdb139df7d59b2a4640ac5cd8ea3d00537ac05f3f5a31b39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726777051069389753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c5
244352f041b322cd21acf9548197,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2444f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc20e8962f6ad517a526b579976a56b7cc6db6622f988c7d31a088071f3041b,PodSandboxId:dad7253d210e2b405d02f200ff9dd3bc76c2f12be0f0c74ed890b11b4fc80a0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726777051017219779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a75d6d97639639ef8a4306e8bc76d2f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 9094759e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eec571c5-4493-4465-ba91-f0aea4fafc75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.137611488Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09ad4920-4ae1-48bd-9854-c552642db35f name=/runtime.v1.RuntimeService/Version
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.137688803Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09ad4920-4ae1-48bd-9854-c552642db35f name=/runtime.v1.RuntimeService/Version
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.139219170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44d856ed-c8fa-4c1e-abf0-87f5a1dd86a8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.139699817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777072139676593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44d856ed-c8fa-4c1e-abf0-87f5a1dd86a8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.140440236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=082aacad-7905-406b-85b2-009cc27935ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.140491358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=082aacad-7905-406b-85b2-009cc27935ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.140659838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:279f52708f2ffafc9e221e0b8ecfe4cf9446ce0613e06541a78dfaaef573b55e,PodSandboxId:abc25864a00d7e51a24f0186bd7aa2903c63864794ee80539af7ba806913f967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726777064395082414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fcgqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe227fd-c850-4a06-80be-ce316e87c0fa,},Annotations:map[string]string{io.kubernetes.container.hash: d1c2a6ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c370480c30dd3e173263da11d95bf0dbfe3adcf97fe306bfca261d11035146e,PodSandboxId:a5f1914d2cf64b6b6fd631ee53e163dafe43b39a3b9db4b6c724b18df560b4a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726777057397908933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9j5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9aedab61-52e6-4dd6-b1f5-abb9deee6d24,},Annotations:map[string]string{io.kubernetes.container.hash: 7bbf4803,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b643e4dd5c2f383be572844a5aa56eb75bf8e6807688da3ef86687c92b540d,PodSandboxId:e2d16c08fb2c03f2dbaf099ab738f4fa437a6a8cb1f3543c3079b8548b383cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726777057395896306,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff
f94643-ff74-4fe8-a9d5-b339bd0abe07,},Annotations:map[string]string{io.kubernetes.container.hash: eea9dedd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8865b19800cf1089ba42d3304c919600623392fcf2109f107e5ccace7a13aacb,PodSandboxId:f41d642e2574b8b3c6e33c7933fd0401b1a4dc6663df220ff8c744ae5e245627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726777051336330694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 51702d96c7e466453ca5ed4c38cfa232,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57c280cad94551e84574a0f672b47a6ed92df6d739bb85ab20df482af676da7,PodSandboxId:ffec9c2c2873c7bc5e1d15416b542db157e2cce8831f7470decf14d9184a6c0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726777051066061582,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: bd2061012b7c78fb5252e5df0bfd7919,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8b315c9c09fd2c838f2ee30ef2b437ca03355de54db36d1c0574a75988f067,PodSandboxId:be042279b1beccd2fdb139df7d59b2a4640ac5cd8ea3d00537ac05f3f5a31b39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726777051069389753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c5
244352f041b322cd21acf9548197,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2444f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc20e8962f6ad517a526b579976a56b7cc6db6622f988c7d31a088071f3041b,PodSandboxId:dad7253d210e2b405d02f200ff9dd3bc76c2f12be0f0c74ed890b11b4fc80a0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726777051017219779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a75d6d97639639ef8a4306e8bc76d2f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 9094759e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=082aacad-7905-406b-85b2-009cc27935ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.173602059Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94d36b38-31ed-4b17-bfd2-e75db7c51bc6 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.173671928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94d36b38-31ed-4b17-bfd2-e75db7c51bc6 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.174879883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9d5cb12-4a55-4b61-a29a-7662a419aa05 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.175407779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777072175381048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9d5cb12-4a55-4b61-a29a-7662a419aa05 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.175996108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebbb76b0-6b61-40ad-a5d5-643ffc06d332 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.176050632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebbb76b0-6b61-40ad-a5d5-643ffc06d332 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:17:52 test-preload-937590 crio[668]: time="2024-09-19 20:17:52.176197955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:279f52708f2ffafc9e221e0b8ecfe4cf9446ce0613e06541a78dfaaef573b55e,PodSandboxId:abc25864a00d7e51a24f0186bd7aa2903c63864794ee80539af7ba806913f967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726777064395082414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fcgqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe227fd-c850-4a06-80be-ce316e87c0fa,},Annotations:map[string]string{io.kubernetes.container.hash: d1c2a6ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c370480c30dd3e173263da11d95bf0dbfe3adcf97fe306bfca261d11035146e,PodSandboxId:a5f1914d2cf64b6b6fd631ee53e163dafe43b39a3b9db4b6c724b18df560b4a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726777057397908933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9j5z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9aedab61-52e6-4dd6-b1f5-abb9deee6d24,},Annotations:map[string]string{io.kubernetes.container.hash: 7bbf4803,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45b643e4dd5c2f383be572844a5aa56eb75bf8e6807688da3ef86687c92b540d,PodSandboxId:e2d16c08fb2c03f2dbaf099ab738f4fa437a6a8cb1f3543c3079b8548b383cbd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726777057395896306,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff
f94643-ff74-4fe8-a9d5-b339bd0abe07,},Annotations:map[string]string{io.kubernetes.container.hash: eea9dedd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8865b19800cf1089ba42d3304c919600623392fcf2109f107e5ccace7a13aacb,PodSandboxId:f41d642e2574b8b3c6e33c7933fd0401b1a4dc6663df220ff8c744ae5e245627,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726777051336330694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 51702d96c7e466453ca5ed4c38cfa232,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57c280cad94551e84574a0f672b47a6ed92df6d739bb85ab20df482af676da7,PodSandboxId:ffec9c2c2873c7bc5e1d15416b542db157e2cce8831f7470decf14d9184a6c0d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726777051066061582,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: bd2061012b7c78fb5252e5df0bfd7919,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8b315c9c09fd2c838f2ee30ef2b437ca03355de54db36d1c0574a75988f067,PodSandboxId:be042279b1beccd2fdb139df7d59b2a4640ac5cd8ea3d00537ac05f3f5a31b39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726777051069389753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1c5
244352f041b322cd21acf9548197,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2444f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc20e8962f6ad517a526b579976a56b7cc6db6622f988c7d31a088071f3041b,PodSandboxId:dad7253d210e2b405d02f200ff9dd3bc76c2f12be0f0c74ed890b11b4fc80a0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726777051017219779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-937590,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a75d6d97639639ef8a4306e8bc76d2f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 9094759e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebbb76b0-6b61-40ad-a5d5-643ffc06d332 name=/runtime.v1.RuntimeService/ListContainers
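The repeated Version / ImageFsInfo / ListContainers entries above are ordinary CRI RuntimeService and ImageService calls made against the CRI-O socket while the test polls the node. As a point of reference only (this is not part of the minikube test harness), a minimal Go sketch of issuing the same two RuntimeService calls over the socket named in the node's cri-socket annotation might look like the following; the grpc and k8s.io/cri-api packages are assumptions about what a reader would use, not something taken from this log.

// minimal sketch: query CRI-O the same way the debug log above shows
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI-O socket referenced by kubeadm.alpha.kubernetes.io/cri-socket.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same call as the "/runtime.v1.RuntimeService/Version" entries above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

	// Same call as the ListContainers entries above; an empty filter returns
	// the full container list, as the "No filters were applied" lines note.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}

The container status table that follows is the same ListContainers data rendered as a table.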
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	279f52708f2ff       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   abc25864a00d7       coredns-6d4b75cb6d-fcgqg
	1c370480c30dd       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   a5f1914d2cf64       kube-proxy-l9j5z
	45b643e4dd5c2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   e2d16c08fb2c0       storage-provisioner
	8865b19800cf1       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   f41d642e2574b       kube-controller-manager-test-preload-937590
	6a8b315c9c09f       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   be042279b1bec       kube-apiserver-test-preload-937590
	e57c280cad945       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   ffec9c2c2873c       kube-scheduler-test-preload-937590
	cdc20e8962f6a       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   dad7253d210e2       etcd-test-preload-937590
	
	
	==> coredns [279f52708f2ffafc9e221e0b8ecfe4cf9446ce0613e06541a78dfaaef573b55e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:47629 - 29898 "HINFO IN 3370999548997746434.8680977757403213269. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014039144s
	
	
	==> describe nodes <==
	Name:               test-preload-937590
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-937590
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=test-preload-937590
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T20_16_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 20:16:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-937590
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 20:17:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 20:17:45 +0000   Thu, 19 Sep 2024 20:16:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 20:17:45 +0000   Thu, 19 Sep 2024 20:16:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 20:17:45 +0000   Thu, 19 Sep 2024 20:16:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 20:17:45 +0000   Thu, 19 Sep 2024 20:17:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    test-preload-937590
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5db7e3749465405684642b3494e7f03e
	  System UUID:                5db7e374-9465-4056-8464-2b3494e7f03e
	  Boot ID:                    a4949c8b-5a6f-4b5f-a8de-fa20be3851b9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-fcgqg                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     88s
	  kube-system                 etcd-test-preload-937590                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         103s
	  kube-system                 kube-apiserver-test-preload-937590             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-test-preload-937590    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-l9j5z                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-test-preload-937590             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 87s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  109s (x5 over 109s)  kubelet          Node test-preload-937590 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x4 over 109s)  kubelet          Node test-preload-937590 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x4 over 109s)  kubelet          Node test-preload-937590 status is now: NodeHasSufficientPID
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s                 kubelet          Node test-preload-937590 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s                 kubelet          Node test-preload-937590 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s                 kubelet          Node test-preload-937590 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                91s                  kubelet          Node test-preload-937590 status is now: NodeReady
	  Normal  RegisteredNode           89s                  node-controller  Node test-preload-937590 event: Registered Node test-preload-937590 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-937590 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-937590 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-937590 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-937590 event: Registered Node test-preload-937590 in Controller
	
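In the node description above, the "Allocated resources" percentages are derived from the per-pod requests and the node's allocatable capacity: 100m + 100m + 250m + 200m + 100m = 750m CPU of 2 cores, and 70Mi + 100Mi = 170Mi memory of 2164184Ki. A short sketch of that arithmetic (illustrative only, not code from this test) is:

package main

import "fmt"

func main() {
	// CPU requests from the pod table: coredns, etcd, kube-apiserver,
	// kube-controller-manager, kube-scheduler (kube-proxy and
	// storage-provisioner request 0).
	cpuRequestsMilli := 100 + 100 + 250 + 200 + 100
	cpuAllocatableMilli := 2 * 1000 // 2 CPUs

	// Memory requests: coredns 70Mi + etcd 100Mi.
	memRequestsKi := (70 + 100) * 1024
	memAllocatableKi := 2164184

	fmt.Printf("cpu    %dm (%d%%)\n", cpuRequestsMilli, cpuRequestsMilli*100/cpuAllocatableMilli)
	fmt.Printf("memory %dMi (%d%%)\n", memRequestsKi/1024, memRequestsKi*100/memAllocatableKi)
}

Running this reproduces the "750m (37%)" and "170Mi (8%)" figures shown above.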
	
	==> dmesg <==
	[Sep19 20:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049771] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040238] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep19 20:17] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.417266] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.598887] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.277606] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.061335] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054378] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.174247] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.117293] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.281196] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +12.859198] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.061067] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.539727] systemd-fstab-generator[1115]: Ignoring "noauto" option for root device
	[  +5.866674] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.665513] systemd-fstab-generator[1750]: Ignoring "noauto" option for root device
	[  +5.508383] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [cdc20e8962f6ad517a526b579976a56b7cc6db6622f988c7d31a088071f3041b] <==
	{"level":"info","ts":"2024-09-19T20:17:31.312Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"900c4b71f7b778f3","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-19T20:17:31.314Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-19T20:17:31.315Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"900c4b71f7b778f3","initial-advertise-peer-urls":["https://192.168.39.152:2380"],"listen-peer-urls":["https://192.168.39.152:2380"],"advertise-client-urls":["https://192.168.39.152:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.152:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-19T20:17:31.315Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-19T20:17:31.315Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-19T20:17:31.316Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.152:2380"}
	{"level":"info","ts":"2024-09-19T20:17:31.316Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.152:2380"}
	{"level":"info","ts":"2024-09-19T20:17:31.317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 switched to configuration voters=(10379754194041534707)"}
	{"level":"info","ts":"2024-09-19T20:17:31.317Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ce072c4559d5992c","local-member-id":"900c4b71f7b778f3","added-peer-id":"900c4b71f7b778f3","added-peer-peer-urls":["https://192.168.39.152:2380"]}
	{"level":"info","ts":"2024-09-19T20:17:31.317Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ce072c4559d5992c","local-member-id":"900c4b71f7b778f3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T20:17:31.317Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T20:17:32.896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-19T20:17:32.896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-19T20:17:32.896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 received MsgPreVoteResp from 900c4b71f7b778f3 at term 2"}
	{"level":"info","ts":"2024-09-19T20:17:32.896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became candidate at term 3"}
	{"level":"info","ts":"2024-09-19T20:17:32.896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 received MsgVoteResp from 900c4b71f7b778f3 at term 3"}
	{"level":"info","ts":"2024-09-19T20:17:32.896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became leader at term 3"}
	{"level":"info","ts":"2024-09-19T20:17:32.896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 900c4b71f7b778f3 elected leader 900c4b71f7b778f3 at term 3"}
	{"level":"info","ts":"2024-09-19T20:17:32.896Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"900c4b71f7b778f3","local-member-attributes":"{Name:test-preload-937590 ClientURLs:[https://192.168.39.152:2379]}","request-path":"/0/members/900c4b71f7b778f3/attributes","cluster-id":"ce072c4559d5992c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T20:17:32.897Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T20:17:32.898Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T20:17:32.899Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T20:17:32.899Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.152:2379"}
	{"level":"info","ts":"2024-09-19T20:17:32.900Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T20:17:32.900Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:17:52 up 0 min,  0 users,  load average: 0.76, 0.20, 0.07
	Linux test-preload-937590 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6a8b315c9c09fd2c838f2ee30ef2b437ca03355de54db36d1c0574a75988f067] <==
	I0919 20:17:35.347853       1 establishing_controller.go:76] Starting EstablishingController
	I0919 20:17:35.348738       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0919 20:17:35.348812       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0919 20:17:35.348887       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0919 20:17:35.371281       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0919 20:17:35.396308       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E0919 20:17:35.489374       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0919 20:17:35.500934       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0919 20:17:35.517419       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 20:17:35.517681       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0919 20:17:35.518219       1 cache.go:39] Caches are synced for autoregister controller
	I0919 20:17:35.521066       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 20:17:35.541920       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0919 20:17:35.543309       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 20:17:35.560260       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0919 20:17:36.022047       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0919 20:17:36.348983       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 20:17:37.098740       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0919 20:17:37.112557       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0919 20:17:37.160981       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0919 20:17:37.176920       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 20:17:37.183253       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 20:17:37.879040       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0919 20:17:48.347238       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 20:17:48.413102       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8865b19800cf1089ba42d3304c919600623392fcf2109f107e5ccace7a13aacb] <==
	I0919 20:17:48.363290       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0919 20:17:48.368238       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0919 20:17:48.369414       1 shared_informer.go:262] Caches are synced for PV protection
	I0919 20:17:48.372695       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0919 20:17:48.376554       1 shared_informer.go:262] Caches are synced for persistent volume
	I0919 20:17:48.379642       1 shared_informer.go:262] Caches are synced for crt configmap
	I0919 20:17:48.396674       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0919 20:17:48.436640       1 shared_informer.go:262] Caches are synced for attach detach
	I0919 20:17:48.457226       1 shared_informer.go:262] Caches are synced for taint
	I0919 20:17:48.457571       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0919 20:17:48.457912       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-937590. Assuming now as a timestamp.
	I0919 20:17:48.458015       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0919 20:17:48.459660       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0919 20:17:48.460928       1 event.go:294] "Event occurred" object="test-preload-937590" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-937590 event: Registered Node test-preload-937590 in Controller"
	I0919 20:17:48.466605       1 shared_informer.go:262] Caches are synced for daemon sets
	I0919 20:17:48.525215       1 shared_informer.go:262] Caches are synced for stateful set
	I0919 20:17:48.545554       1 shared_informer.go:262] Caches are synced for disruption
	I0919 20:17:48.545640       1 disruption.go:371] Sending events to api server.
	I0919 20:17:48.592817       1 shared_informer.go:262] Caches are synced for resource quota
	I0919 20:17:48.610134       1 shared_informer.go:262] Caches are synced for resource quota
	I0919 20:17:48.615551       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0919 20:17:48.622880       1 shared_informer.go:262] Caches are synced for deployment
	I0919 20:17:49.026968       1 shared_informer.go:262] Caches are synced for garbage collector
	I0919 20:17:49.027024       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0919 20:17:49.033158       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [1c370480c30dd3e173263da11d95bf0dbfe3adcf97fe306bfca261d11035146e] <==
	I0919 20:17:37.825811       1 node.go:163] Successfully retrieved node IP: 192.168.39.152
	I0919 20:17:37.826002       1 server_others.go:138] "Detected node IP" address="192.168.39.152"
	I0919 20:17:37.826061       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0919 20:17:37.866418       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0919 20:17:37.866435       1 server_others.go:206] "Using iptables Proxier"
	I0919 20:17:37.867297       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0919 20:17:37.868929       1 server.go:661] "Version info" version="v1.24.4"
	I0919 20:17:37.868978       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:17:37.871025       1 config.go:317] "Starting service config controller"
	I0919 20:17:37.871099       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0919 20:17:37.871136       1 config.go:226] "Starting endpoint slice config controller"
	I0919 20:17:37.871153       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0919 20:17:37.873562       1 config.go:444] "Starting node config controller"
	I0919 20:17:37.873604       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0919 20:17:37.972171       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0919 20:17:37.972202       1 shared_informer.go:262] Caches are synced for service config
	I0919 20:17:37.974367       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [e57c280cad94551e84574a0f672b47a6ed92df6d739bb85ab20df482af676da7] <==
	I0919 20:17:31.723862       1 serving.go:348] Generated self-signed cert in-memory
	W0919 20:17:35.384932       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 20:17:35.384971       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 20:17:35.385015       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 20:17:35.385042       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 20:17:35.473055       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0919 20:17:35.473091       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:17:35.486827       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 20:17:35.487304       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 20:17:35.488349       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:17:35.488418       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 20:17:35.589657       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.336220    1122 topology_manager.go:200] "Topology Admit Handler"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.336391    1122 topology_manager.go:200] "Topology Admit Handler"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.336502    1122 topology_manager.go:200] "Topology Admit Handler"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.336580    1122 topology_manager.go:200] "Topology Admit Handler"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: E0919 20:17:36.338420    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fcgqg" podUID=dfe227fd-c850-4a06-80be-ce316e87c0fa
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.414963    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcgfb\" (UniqueName: \"kubernetes.io/projected/fff94643-ff74-4fe8-a9d5-b339bd0abe07-kube-api-access-wcgfb\") pod \"storage-provisioner\" (UID: \"fff94643-ff74-4fe8-a9d5-b339bd0abe07\") " pod="kube-system/storage-provisioner"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.415005    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9aedab61-52e6-4dd6-b1f5-abb9deee6d24-kube-proxy\") pod \"kube-proxy-l9j5z\" (UID: \"9aedab61-52e6-4dd6-b1f5-abb9deee6d24\") " pod="kube-system/kube-proxy-l9j5z"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.415025    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkl6w\" (UniqueName: \"kubernetes.io/projected/9aedab61-52e6-4dd6-b1f5-abb9deee6d24-kube-api-access-dkl6w\") pod \"kube-proxy-l9j5z\" (UID: \"9aedab61-52e6-4dd6-b1f5-abb9deee6d24\") " pod="kube-system/kube-proxy-l9j5z"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.415045    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9aedab61-52e6-4dd6-b1f5-abb9deee6d24-xtables-lock\") pod \"kube-proxy-l9j5z\" (UID: \"9aedab61-52e6-4dd6-b1f5-abb9deee6d24\") " pod="kube-system/kube-proxy-l9j5z"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.415065    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fff94643-ff74-4fe8-a9d5-b339bd0abe07-tmp\") pod \"storage-provisioner\" (UID: \"fff94643-ff74-4fe8-a9d5-b339bd0abe07\") " pod="kube-system/storage-provisioner"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.415083    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9aedab61-52e6-4dd6-b1f5-abb9deee6d24-lib-modules\") pod \"kube-proxy-l9j5z\" (UID: \"9aedab61-52e6-4dd6-b1f5-abb9deee6d24\") " pod="kube-system/kube-proxy-l9j5z"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.415110    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfe227fd-c850-4a06-80be-ce316e87c0fa-config-volume\") pod \"coredns-6d4b75cb6d-fcgqg\" (UID: \"dfe227fd-c850-4a06-80be-ce316e87c0fa\") " pod="kube-system/coredns-6d4b75cb6d-fcgqg"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.415134    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9tvg\" (UniqueName: \"kubernetes.io/projected/dfe227fd-c850-4a06-80be-ce316e87c0fa-kube-api-access-z9tvg\") pod \"coredns-6d4b75cb6d-fcgqg\" (UID: \"dfe227fd-c850-4a06-80be-ce316e87c0fa\") " pod="kube-system/coredns-6d4b75cb6d-fcgqg"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: I0919 20:17:36.415144    1122 reconciler.go:159] "Reconciler: start to sync state"
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: E0919 20:17:36.521490    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 20:17:36 test-preload-937590 kubelet[1122]: E0919 20:17:36.521600    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/dfe227fd-c850-4a06-80be-ce316e87c0fa-config-volume podName:dfe227fd-c850-4a06-80be-ce316e87c0fa nodeName:}" failed. No retries permitted until 2024-09-19 20:17:37.021566798 +0000 UTC m=+6.798864626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dfe227fd-c850-4a06-80be-ce316e87c0fa-config-volume") pod "coredns-6d4b75cb6d-fcgqg" (UID: "dfe227fd-c850-4a06-80be-ce316e87c0fa") : object "kube-system"/"coredns" not registered
	Sep 19 20:17:37 test-preload-937590 kubelet[1122]: E0919 20:17:37.024983    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 20:17:37 test-preload-937590 kubelet[1122]: E0919 20:17:37.025054    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/dfe227fd-c850-4a06-80be-ce316e87c0fa-config-volume podName:dfe227fd-c850-4a06-80be-ce316e87c0fa nodeName:}" failed. No retries permitted until 2024-09-19 20:17:38.025039762 +0000 UTC m=+7.802337576 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dfe227fd-c850-4a06-80be-ce316e87c0fa-config-volume") pod "coredns-6d4b75cb6d-fcgqg" (UID: "dfe227fd-c850-4a06-80be-ce316e87c0fa") : object "kube-system"/"coredns" not registered
	Sep 19 20:17:37 test-preload-937590 kubelet[1122]: E0919 20:17:37.456993    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fcgqg" podUID=dfe227fd-c850-4a06-80be-ce316e87c0fa
	Sep 19 20:17:38 test-preload-937590 kubelet[1122]: E0919 20:17:38.033948    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 20:17:38 test-preload-937590 kubelet[1122]: E0919 20:17:38.034018    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/dfe227fd-c850-4a06-80be-ce316e87c0fa-config-volume podName:dfe227fd-c850-4a06-80be-ce316e87c0fa nodeName:}" failed. No retries permitted until 2024-09-19 20:17:40.034003871 +0000 UTC m=+9.811301685 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dfe227fd-c850-4a06-80be-ce316e87c0fa-config-volume") pod "coredns-6d4b75cb6d-fcgqg" (UID: "dfe227fd-c850-4a06-80be-ce316e87c0fa") : object "kube-system"/"coredns" not registered
	Sep 19 20:17:38 test-preload-937590 kubelet[1122]: I0919 20:17:38.463035    1122 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d1f6fb7e-ffa3-4281-bd9f-32642f984a02 path="/var/lib/kubelet/pods/d1f6fb7e-ffa3-4281-bd9f-32642f984a02/volumes"
	Sep 19 20:17:39 test-preload-937590 kubelet[1122]: E0919 20:17:39.456618    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fcgqg" podUID=dfe227fd-c850-4a06-80be-ce316e87c0fa
	Sep 19 20:17:40 test-preload-937590 kubelet[1122]: E0919 20:17:40.051832    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 20:17:40 test-preload-937590 kubelet[1122]: E0919 20:17:40.051939    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/dfe227fd-c850-4a06-80be-ce316e87c0fa-config-volume podName:dfe227fd-c850-4a06-80be-ce316e87c0fa nodeName:}" failed. No retries permitted until 2024-09-19 20:17:44.051919366 +0000 UTC m=+13.829217180 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dfe227fd-c850-4a06-80be-ce316e87c0fa-config-volume") pod "coredns-6d4b75cb6d-fcgqg" (UID: "dfe227fd-c850-4a06-80be-ce316e87c0fa") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [45b643e4dd5c2f383be572844a5aa56eb75bf8e6807688da3ef86687c92b540d] <==
	I0919 20:17:37.591682       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-937590 -n test-preload-937590
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-937590 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-937590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-937590
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-937590: (1.1032135s)
--- FAIL: TestPreload (178.73s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (43.51s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-670672 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-670672 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.230663419s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-670672] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-670672" primary control-plane node in "pause-670672" cluster
	* Updating the running kvm2 "pause-670672" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-670672" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 20:25:04.422385   61318 out.go:345] Setting OutFile to fd 1 ...
	I0919 20:25:04.422641   61318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:25:04.422651   61318 out.go:358] Setting ErrFile to fd 2...
	I0919 20:25:04.422655   61318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:25:04.422824   61318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 20:25:04.423382   61318 out.go:352] Setting JSON to false
	I0919 20:25:04.424293   61318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7648,"bootTime":1726769856,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 20:25:04.424385   61318 start.go:139] virtualization: kvm guest
	I0919 20:25:04.427012   61318 out.go:177] * [pause-670672] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 20:25:04.428761   61318 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 20:25:04.428765   61318 notify.go:220] Checking for updates...
	I0919 20:25:04.431311   61318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 20:25:04.432461   61318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 20:25:04.434066   61318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 20:25:04.435348   61318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 20:25:04.436678   61318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 20:25:04.438367   61318 config.go:182] Loaded profile config "pause-670672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 20:25:04.438921   61318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:25:04.438975   61318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:25:04.454369   61318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35347
	I0919 20:25:04.454879   61318 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:25:04.455432   61318 main.go:141] libmachine: Using API Version  1
	I0919 20:25:04.455460   61318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:25:04.455834   61318 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:25:04.456047   61318 main.go:141] libmachine: (pause-670672) Calling .DriverName
	I0919 20:25:04.456294   61318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 20:25:04.456631   61318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:25:04.456684   61318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:25:04.471618   61318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
	I0919 20:25:04.472064   61318 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:25:04.472681   61318 main.go:141] libmachine: Using API Version  1
	I0919 20:25:04.472705   61318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:25:04.473081   61318 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:25:04.473311   61318 main.go:141] libmachine: (pause-670672) Calling .DriverName
	I0919 20:25:04.509463   61318 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 20:25:04.510717   61318 start.go:297] selected driver: kvm2
	I0919 20:25:04.510738   61318 start.go:901] validating driver "kvm2" against &{Name:pause-670672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:25:04.510907   61318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 20:25:04.511357   61318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 20:25:04.511457   61318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 20:25:04.527352   61318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 20:25:04.528166   61318 cni.go:84] Creating CNI manager for ""
	I0919 20:25:04.528223   61318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 20:25:04.528274   61318 start.go:340] cluster config:
	{Name:pause-670672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:25:04.528431   61318 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 20:25:04.531661   61318 out.go:177] * Starting "pause-670672" primary control-plane node in "pause-670672" cluster
	I0919 20:25:04.533126   61318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 20:25:04.533182   61318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 20:25:04.533208   61318 cache.go:56] Caching tarball of preloaded images
	I0919 20:25:04.533321   61318 preload.go:172] Found /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 20:25:04.533334   61318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 20:25:04.533503   61318 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/config.json ...
	I0919 20:25:04.533745   61318 start.go:360] acquireMachinesLock for pause-670672: {Name:mk2a40003a4c9ebef4e890988a9618a90b7115bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 20:25:04.533799   61318 start.go:364] duration metric: took 30.946µs to acquireMachinesLock for "pause-670672"
	I0919 20:25:04.533819   61318 start.go:96] Skipping create...Using existing machine configuration
	I0919 20:25:04.533825   61318 fix.go:54] fixHost starting: 
	I0919 20:25:04.534182   61318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:25:04.534224   61318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:25:04.549919   61318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33035
	I0919 20:25:04.550482   61318 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:25:04.550955   61318 main.go:141] libmachine: Using API Version  1
	I0919 20:25:04.550981   61318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:25:04.551454   61318 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:25:04.551643   61318 main.go:141] libmachine: (pause-670672) Calling .DriverName
	I0919 20:25:04.551824   61318 main.go:141] libmachine: (pause-670672) Calling .GetState
	I0919 20:25:04.553593   61318 fix.go:112] recreateIfNeeded on pause-670672: state=Running err=<nil>
	W0919 20:25:04.553666   61318 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 20:25:04.555089   61318 out.go:177] * Updating the running kvm2 "pause-670672" VM ...
	I0919 20:25:04.556737   61318 machine.go:93] provisionDockerMachine start ...
	I0919 20:25:04.556762   61318 main.go:141] libmachine: (pause-670672) Calling .DriverName
	I0919 20:25:04.556987   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHHostname
	I0919 20:25:04.560060   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:04.560556   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:04.560583   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:04.560825   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHPort
	I0919 20:25:04.561158   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:04.561422   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:04.561566   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHUsername
	I0919 20:25:04.561745   61318 main.go:141] libmachine: Using SSH client type: native
	I0919 20:25:04.561946   61318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0919 20:25:04.561967   61318 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 20:25:04.671445   61318 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-670672
	
	I0919 20:25:04.671490   61318 main.go:141] libmachine: (pause-670672) Calling .GetMachineName
	I0919 20:25:04.672518   61318 buildroot.go:166] provisioning hostname "pause-670672"
	I0919 20:25:04.672548   61318 main.go:141] libmachine: (pause-670672) Calling .GetMachineName
	I0919 20:25:04.672727   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHHostname
	I0919 20:25:04.675884   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:04.676246   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:04.676270   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:04.676454   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHPort
	I0919 20:25:04.676606   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:04.676744   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:04.676828   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHUsername
	I0919 20:25:04.676991   61318 main.go:141] libmachine: Using SSH client type: native
	I0919 20:25:04.677254   61318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0919 20:25:04.677272   61318 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-670672 && echo "pause-670672" | sudo tee /etc/hostname
	I0919 20:25:04.807111   61318 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-670672
	
	I0919 20:25:04.807144   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHHostname
	I0919 20:25:04.810337   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:04.810736   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:04.810766   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:04.810952   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHPort
	I0919 20:25:04.811161   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:04.811336   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:04.811509   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHUsername
	I0919 20:25:04.811692   61318 main.go:141] libmachine: Using SSH client type: native
	I0919 20:25:04.811936   61318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0919 20:25:04.811967   61318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-670672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-670672/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-670672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 20:25:04.926584   61318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 20:25:04.926610   61318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19664-7917/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-7917/.minikube}
	I0919 20:25:04.926634   61318 buildroot.go:174] setting up certificates
	I0919 20:25:04.926646   61318 provision.go:84] configureAuth start
	I0919 20:25:04.926660   61318 main.go:141] libmachine: (pause-670672) Calling .GetMachineName
	I0919 20:25:04.926940   61318 main.go:141] libmachine: (pause-670672) Calling .GetIP
	I0919 20:25:04.929648   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:04.930023   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:04.930048   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:04.930254   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHHostname
	I0919 20:25:04.932646   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:04.932979   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:04.933007   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:04.933247   61318 provision.go:143] copyHostCerts
	I0919 20:25:04.933326   61318 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem, removing ...
	I0919 20:25:04.933338   61318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem
	I0919 20:25:04.933414   61318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/ca.pem (1078 bytes)
	I0919 20:25:04.933509   61318 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem, removing ...
	I0919 20:25:04.933517   61318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem
	I0919 20:25:04.933540   61318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/cert.pem (1123 bytes)
	I0919 20:25:04.933604   61318 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem, removing ...
	I0919 20:25:04.933611   61318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem
	I0919 20:25:04.933632   61318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-7917/.minikube/key.pem (1679 bytes)
	I0919 20:25:04.933688   61318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem org=jenkins.pause-670672 san=[127.0.0.1 192.168.39.136 localhost minikube pause-670672]
	I0919 20:25:05.053810   61318 provision.go:177] copyRemoteCerts
	I0919 20:25:05.053871   61318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 20:25:05.053892   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHHostname
	I0919 20:25:05.056752   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:05.057319   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:05.057355   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:05.057536   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHPort
	I0919 20:25:05.057729   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:05.057890   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHUsername
	I0919 20:25:05.058017   61318 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/pause-670672/id_rsa Username:docker}
	I0919 20:25:05.141543   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 20:25:05.173473   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 20:25:05.200739   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 20:25:05.226088   61318 provision.go:87] duration metric: took 299.42564ms to configureAuth
	I0919 20:25:05.226121   61318 buildroot.go:189] setting minikube options for container-runtime
	I0919 20:25:05.226385   61318 config.go:182] Loaded profile config "pause-670672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 20:25:05.226476   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHHostname
	I0919 20:25:05.229620   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:05.230050   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:05.230087   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:05.230407   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHPort
	I0919 20:25:05.230609   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:05.230782   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:05.230950   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHUsername
	I0919 20:25:05.231127   61318 main.go:141] libmachine: Using SSH client type: native
	I0919 20:25:05.231383   61318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0919 20:25:05.231407   61318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 20:25:10.751141   61318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 20:25:10.751166   61318 machine.go:96] duration metric: took 6.194413514s to provisionDockerMachine
	I0919 20:25:10.751177   61318 start.go:293] postStartSetup for "pause-670672" (driver="kvm2")
	I0919 20:25:10.751187   61318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 20:25:10.751204   61318 main.go:141] libmachine: (pause-670672) Calling .DriverName
	I0919 20:25:10.751533   61318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 20:25:10.751566   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHHostname
	I0919 20:25:10.754534   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:10.754885   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:10.754919   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:10.755071   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHPort
	I0919 20:25:10.755250   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:10.755417   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHUsername
	I0919 20:25:10.755537   61318 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/pause-670672/id_rsa Username:docker}
	I0919 20:25:10.836002   61318 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 20:25:10.840761   61318 info.go:137] Remote host: Buildroot 2023.02.9
	I0919 20:25:10.840787   61318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/addons for local assets ...
	I0919 20:25:10.840855   61318 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-7917/.minikube/files for local assets ...
	I0919 20:25:10.840928   61318 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem -> 151162.pem in /etc/ssl/certs
	I0919 20:25:10.841008   61318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 20:25:10.850911   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /etc/ssl/certs/151162.pem (1708 bytes)
	I0919 20:25:10.875636   61318 start.go:296] duration metric: took 124.446648ms for postStartSetup
	I0919 20:25:10.875671   61318 fix.go:56] duration metric: took 6.341846279s for fixHost
	I0919 20:25:10.875689   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHHostname
	I0919 20:25:10.878483   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:10.878818   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:10.878846   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:10.878974   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHPort
	I0919 20:25:10.879174   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:10.879352   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:10.879505   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHUsername
	I0919 20:25:10.879673   61318 main.go:141] libmachine: Using SSH client type: native
	I0919 20:25:10.879857   61318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0919 20:25:10.879879   61318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 20:25:10.982154   61318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726777510.962578296
	
	I0919 20:25:10.982190   61318 fix.go:216] guest clock: 1726777510.962578296
	I0919 20:25:10.982200   61318 fix.go:229] Guest: 2024-09-19 20:25:10.962578296 +0000 UTC Remote: 2024-09-19 20:25:10.875674724 +0000 UTC m=+6.494684534 (delta=86.903572ms)
	I0919 20:25:10.982264   61318 fix.go:200] guest clock delta is within tolerance: 86.903572ms
	I0919 20:25:10.982275   61318 start.go:83] releasing machines lock for "pause-670672", held for 6.448463463s
	I0919 20:25:10.982305   61318 main.go:141] libmachine: (pause-670672) Calling .DriverName
	I0919 20:25:10.982601   61318 main.go:141] libmachine: (pause-670672) Calling .GetIP
	I0919 20:25:10.985851   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:10.986337   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:10.986365   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:10.986629   61318 main.go:141] libmachine: (pause-670672) Calling .DriverName
	I0919 20:25:10.987240   61318 main.go:141] libmachine: (pause-670672) Calling .DriverName
	I0919 20:25:10.987460   61318 main.go:141] libmachine: (pause-670672) Calling .DriverName
	I0919 20:25:10.987584   61318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 20:25:10.987624   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHHostname
	I0919 20:25:10.987656   61318 ssh_runner.go:195] Run: cat /version.json
	I0919 20:25:10.987676   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHHostname
	I0919 20:25:10.990804   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:10.991055   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:10.991248   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:10.991278   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:10.991388   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:10.991414   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:10.991606   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHPort
	I0919 20:25:10.991614   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHPort
	I0919 20:25:10.991766   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:10.991785   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHKeyPath
	I0919 20:25:10.991968   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHUsername
	I0919 20:25:10.991972   61318 main.go:141] libmachine: (pause-670672) Calling .GetSSHUsername
	I0919 20:25:10.992171   61318 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/pause-670672/id_rsa Username:docker}
	I0919 20:25:10.992169   61318 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/pause-670672/id_rsa Username:docker}
	I0919 20:25:11.075995   61318 ssh_runner.go:195] Run: systemctl --version
	I0919 20:25:11.098514   61318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 20:25:11.256223   61318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 20:25:11.262801   61318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 20:25:11.262872   61318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 20:25:11.275963   61318 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 20:25:11.275988   61318 start.go:495] detecting cgroup driver to use...
	I0919 20:25:11.276076   61318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 20:25:11.297421   61318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 20:25:11.313143   61318 docker.go:217] disabling cri-docker service (if available) ...
	I0919 20:25:11.313214   61318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 20:25:11.332624   61318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 20:25:11.349738   61318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 20:25:11.491365   61318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 20:25:11.628136   61318 docker.go:233] disabling docker service ...
	I0919 20:25:11.628218   61318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 20:25:11.647255   61318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 20:25:11.665674   61318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 20:25:11.813967   61318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 20:25:11.973367   61318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 20:25:11.988485   61318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 20:25:12.008332   61318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 20:25:12.008387   61318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:25:12.021204   61318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 20:25:12.021270   61318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:25:12.032451   61318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:25:12.043555   61318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:25:12.054318   61318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 20:25:12.069972   61318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:25:12.084496   61318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:25:12.100987   61318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 20:25:12.112133   61318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 20:25:12.127990   61318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 20:25:12.144090   61318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:25:12.286257   61318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 20:25:12.503445   61318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 20:25:12.503521   61318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 20:25:12.508677   61318 start.go:563] Will wait 60s for crictl version
	I0919 20:25:12.508739   61318 ssh_runner.go:195] Run: which crictl
	I0919 20:25:12.512946   61318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 20:25:12.552362   61318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 20:25:12.552452   61318 ssh_runner.go:195] Run: crio --version
	I0919 20:25:12.580854   61318 ssh_runner.go:195] Run: crio --version
	I0919 20:25:12.612980   61318 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0919 20:25:12.614568   61318 main.go:141] libmachine: (pause-670672) Calling .GetIP
	I0919 20:25:12.617799   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:12.618184   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:12.618208   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:12.618418   61318 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 20:25:12.623293   61318 kubeadm.go:883] updating cluster {Name:pause-670672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 20:25:12.623415   61318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 20:25:12.623456   61318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:25:12.677493   61318 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 20:25:12.677518   61318 crio.go:433] Images already preloaded, skipping extraction
	I0919 20:25:12.677566   61318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:25:12.715420   61318 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 20:25:12.715444   61318 cache_images.go:84] Images are preloaded, skipping loading
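The two `sudo crictl images --output json` runs above confirm that every image needed for Kubernetes v1.31.1 is already in the CRI-O store, so no preload tarball is extracted and no images are loaded. A rough sketch of decoding that output, assuming crictl's JSON shape of an "images" array with "repoTags" per entry:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// imageList mirrors the subset of `crictl images --output json` used here.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Same invocation as in the log; requires crictl and sudo on the node.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%d images present in the CRI-O store\n", len(list.Images))
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag)
			}
		}
	}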
	I0919 20:25:12.715454   61318 kubeadm.go:934] updating node { 192.168.39.136 8443 v1.31.1 crio true true} ...
	I0919 20:25:12.715565   61318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-670672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 20:25:12.715628   61318 ssh_runner.go:195] Run: crio config
	I0919 20:25:12.766799   61318 cni.go:84] Creating CNI manager for ""
	I0919 20:25:12.766818   61318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 20:25:12.766827   61318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 20:25:12.766848   61318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-670672 NodeName:pause-670672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 20:25:12.766976   61318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-670672"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
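The generated kubeadm config above is one multi-document YAML carrying the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration that kubeadm consumes; it is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small stdlib-only Go sketch that splits such a file on its "---" separators and reports the apiVersion and kind of each document (the local file name is an assumption):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Read a locally saved copy of the multi-document config shown above.
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// kubeadm separates documents with a line containing only "---".
		for i, doc := range strings.Split(string(data), "\n---\n") {
			var apiVersion, kind string
			for _, line := range strings.Split(doc, "\n") {
				switch {
				case strings.HasPrefix(line, "apiVersion:"):
					apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
				case strings.HasPrefix(line, "kind:"):
					kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
				}
			}
			fmt.Printf("document %d: %s %s\n", i+1, apiVersion, kind)
		}
	}

Run against the config above, this would report the four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.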
	I0919 20:25:12.767031   61318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 20:25:12.777436   61318 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 20:25:12.777512   61318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 20:25:12.787300   61318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 20:25:12.808085   61318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 20:25:12.830467   61318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0919 20:25:12.853427   61318 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I0919 20:25:12.857604   61318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:25:13.004416   61318 ssh_runner.go:195] Run: sudo systemctl start kubelet
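With the kubelet.service unit, the 10-kubeadm.conf drop-in and kubeadm.yaml.new written, the log shows systemd being reloaded and kubelet started. A minimal sketch of that pair of commands, assuming it runs directly on the node with passwordless sudo rather than through ssh_runner:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes a command, streaming its output, and returns any failure.
	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// systemd must re-read unit files before the new drop-in takes effect.
		if err := run("sudo", "systemctl", "daemon-reload"); err != nil {
			fmt.Fprintln(os.Stderr, "daemon-reload:", err)
			os.Exit(1)
		}
		if err := run("sudo", "systemctl", "start", "kubelet"); err != nil {
			fmt.Fprintln(os.Stderr, "start kubelet:", err)
			os.Exit(1)
		}
	}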
	I0919 20:25:13.020112   61318 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672 for IP: 192.168.39.136
	I0919 20:25:13.020140   61318 certs.go:194] generating shared ca certs ...
	I0919 20:25:13.020159   61318 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 20:25:13.020353   61318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 20:25:13.020414   61318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 20:25:13.020429   61318 certs.go:256] generating profile certs ...
	I0919 20:25:13.020514   61318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/client.key
	I0919 20:25:13.020572   61318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/apiserver.key.cb6f5b4b
	I0919 20:25:13.020605   61318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/proxy-client.key
	I0919 20:25:13.020724   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 20:25:13.020765   61318 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 20:25:13.020775   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 20:25:13.020798   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 20:25:13.020821   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 20:25:13.020843   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 20:25:13.020880   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 20:25:13.021568   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 20:25:13.050492   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 20:25:13.077047   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 20:25:13.107924   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 20:25:13.132521   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 20:25:13.163359   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 20:25:13.189997   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 20:25:13.217658   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 20:25:13.243146   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 20:25:13.268846   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 20:25:13.304781   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 20:25:13.331376   61318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 20:25:13.351157   61318 ssh_runner.go:195] Run: openssl version
	I0919 20:25:13.357672   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 20:25:13.369679   61318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:25:13.374748   61318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:25:13.374810   61318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:25:13.380688   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 20:25:13.389974   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 20:25:13.400588   61318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 20:25:13.405089   61318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 20:25:13.405152   61318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 20:25:13.411128   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 20:25:13.421435   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 20:25:13.434234   61318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 20:25:13.438796   61318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 20:25:13.438863   61318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 20:25:13.445690   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
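Each CA certificate placed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash, which is where the b5213941.0, 51391683.0 and 3ec20f2e.0 names above come from and how OpenSSL locates trust anchors. A sketch of that step for one certificate, shelling out to the same `openssl x509 -hash -noout` call as the log (paths taken from the log, error handling kept minimal):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// hashLink creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
	// mirroring the `openssl x509 -hash` + `ln -fs` sequence in the log.
	func hashLink(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // replace any stale link, as ln -fs would
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}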
	I0919 20:25:13.455847   61318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 20:25:13.460864   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 20:25:13.466459   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 20:25:13.476218   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 20:25:13.495704   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 20:25:13.547017   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 20:25:13.580141   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
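The `openssl x509 ... -checkend 86400` runs above exit non-zero when a certificate expires within the next 24 hours, which is the condition that would force certificate regeneration. The same check can be done without shelling out; a short sketch using crypto/x509, with the certificate path taken from the log and the 24-hour window from the -checkend argument:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// openssl's -checkend 86400 would exit 1 in the "true" case.
		fmt.Println("expires within 24h:", soon)
	}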
	I0919 20:25:13.601459   61318 kubeadm.go:392] StartCluster: {Name:pause-670672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:25:13.601570   61318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 20:25:13.601619   61318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 20:25:13.810200   61318 cri.go:89] found id: "b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d"
	I0919 20:25:13.810229   61318 cri.go:89] found id: "4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3"
	I0919 20:25:13.810235   61318 cri.go:89] found id: "01c6b4268b2db0b252c7cecedb98a4fca95853acf13f51176e93d558eaeddc90"
	I0919 20:25:13.810241   61318 cri.go:89] found id: "03a58205b86bf31a7741f4dfc3a5772e5ea2d288faa422997f3dd29f41c6881e"
	I0919 20:25:13.810245   61318 cri.go:89] found id: "a114fcf124488bd89136755c5b47fa6c412ac3d6cdbc4ab73481696b685c248b"
	I0919 20:25:13.810250   61318 cri.go:89] found id: "3d8f8663344aef94598951fa7b68c35231b00bfae6325f6ebc3d9e38848e2c1e"
	I0919 20:25:13.810253   61318 cri.go:89] found id: ""
	I0919 20:25:13.810308   61318 ssh_runner.go:195] Run: sudo runc list -f json
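To see what is already running before reconfiguring, minikube lists kube-system containers through crictl with a label filter and collects the IDs printed as the found id: lines above. A rough sketch of the same listing, assuming crictl is on PATH and sudo is available on the node:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation as in the log: all containers, IDs only,
		// restricted to pods in the kube-system namespace.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}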

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-670672 -n pause-670672
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-670672 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-670672 logs -n 25: (1.568021584s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-801740 sudo find         | cilium-801740             | jenkins | v1.34.0 | 19 Sep 24 20:20 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p cilium-801740 sudo crio         | cilium-801740             | jenkins | v1.34.0 | 19 Sep 24 20:20 UTC |                     |
	|         | config                             |                           |         |         |                     |                     |
	| delete  | -p cilium-801740                   | cilium-801740             | jenkins | v1.34.0 | 19 Sep 24 20:20 UTC | 19 Sep 24 20:20 UTC |
	| start   | -p cert-expiration-478436          | cert-expiration-478436    | jenkins | v1.34.0 | 19 Sep 24 20:20 UTC | 19 Sep 24 20:22 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:21 UTC | 19 Sep 24 20:22 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-011213             | offline-crio-011213       | jenkins | v1.34.0 | 19 Sep 24 20:21 UTC | 19 Sep 24 20:21 UTC |
	| start   | -p force-systemd-flag-013710       | force-systemd-flag-013710 | jenkins | v1.34.0 | 19 Sep 24 20:21 UTC | 19 Sep 24 20:22 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:22 UTC |
	| start   | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:22 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-070299          | running-upgrade-070299    | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:23 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-013710 ssh cat  | force-systemd-flag-013710 | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:22 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-013710       | force-systemd-flag-013710 | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:22 UTC |
	| start   | -p kubernetes-upgrade-342125       | kubernetes-upgrade-342125 | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-045748 sudo        | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:22 UTC |
	| start   | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:23 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-045748 sudo        | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:23 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:23 UTC | 19 Sep 24 20:23 UTC |
	| start   | -p pause-670672 --memory=2048      | pause-670672              | jenkins | v1.34.0 | 19 Sep 24 20:23 UTC | 19 Sep 24 20:25 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-070299          | running-upgrade-070299    | jenkins | v1.34.0 | 19 Sep 24 20:23 UTC | 19 Sep 24 20:23 UTC |
	| start   | -p stopped-upgrade-927381          | minikube                  | jenkins | v1.26.0 | 19 Sep 24 20:23 UTC | 19 Sep 24 20:25 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p pause-670672                    | pause-670672              | jenkins | v1.34.0 | 19 Sep 24 20:25 UTC | 19 Sep 24 20:25 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-927381 stop        | minikube                  | jenkins | v1.26.0 | 19 Sep 24 20:25 UTC | 19 Sep 24 20:25 UTC |
	| start   | -p stopped-upgrade-927381          | stopped-upgrade-927381    | jenkins | v1.34.0 | 19 Sep 24 20:25 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-478436          | cert-expiration-478436    | jenkins | v1.34.0 | 19 Sep 24 20:25 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h            |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 20:25:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 20:25:14.361224   61525 out.go:345] Setting OutFile to fd 1 ...
	I0919 20:25:14.361358   61525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:25:14.361363   61525 out.go:358] Setting ErrFile to fd 2...
	I0919 20:25:14.361368   61525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:25:14.361654   61525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 20:25:14.362381   61525 out.go:352] Setting JSON to false
	I0919 20:25:14.363639   61525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7658,"bootTime":1726769856,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 20:25:14.363747   61525 start.go:139] virtualization: kvm guest
	I0919 20:25:14.366140   61525 out.go:177] * [cert-expiration-478436] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 20:25:14.367936   61525 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 20:25:14.367956   61525 notify.go:220] Checking for updates...
	I0919 20:25:14.371084   61525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 20:25:14.372612   61525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 20:25:14.374004   61525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 20:25:14.375363   61525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 20:25:14.376563   61525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 20:25:12.614568   61318 main.go:141] libmachine: (pause-670672) Calling .GetIP
	I0919 20:25:12.617799   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:12.618184   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:12.618208   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:12.618418   61318 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 20:25:12.623293   61318 kubeadm.go:883] updating cluster {Name:pause-670672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 20:25:12.623415   61318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 20:25:12.623456   61318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:25:12.677493   61318 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 20:25:12.677518   61318 crio.go:433] Images already preloaded, skipping extraction
	I0919 20:25:12.677566   61318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:25:12.715420   61318 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 20:25:12.715444   61318 cache_images.go:84] Images are preloaded, skipping loading
	I0919 20:25:12.715454   61318 kubeadm.go:934] updating node { 192.168.39.136 8443 v1.31.1 crio true true} ...
	I0919 20:25:12.715565   61318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-670672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 20:25:12.715628   61318 ssh_runner.go:195] Run: crio config
	I0919 20:25:12.766799   61318 cni.go:84] Creating CNI manager for ""
	I0919 20:25:12.766818   61318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 20:25:12.766827   61318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 20:25:12.766848   61318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-670672 NodeName:pause-670672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 20:25:12.766976   61318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-670672"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 20:25:12.767031   61318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 20:25:12.777436   61318 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 20:25:12.777512   61318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 20:25:12.787300   61318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 20:25:12.808085   61318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 20:25:12.830467   61318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0919 20:25:12.853427   61318 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I0919 20:25:12.857604   61318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:25:13.004416   61318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 20:25:13.020112   61318 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672 for IP: 192.168.39.136
	I0919 20:25:13.020140   61318 certs.go:194] generating shared ca certs ...
	I0919 20:25:13.020159   61318 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 20:25:13.020353   61318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 20:25:13.020414   61318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 20:25:13.020429   61318 certs.go:256] generating profile certs ...
	I0919 20:25:13.020514   61318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/client.key
	I0919 20:25:13.020572   61318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/apiserver.key.cb6f5b4b
	I0919 20:25:13.020605   61318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/proxy-client.key
	I0919 20:25:13.020724   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 20:25:13.020765   61318 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 20:25:13.020775   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 20:25:13.020798   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 20:25:13.020821   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 20:25:13.020843   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 20:25:13.020880   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 20:25:13.021568   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 20:25:13.050492   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 20:25:13.077047   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 20:25:13.107924   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 20:25:13.132521   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 20:25:13.163359   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 20:25:13.189997   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 20:25:13.217658   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 20:25:13.243146   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 20:25:13.268846   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 20:25:13.304781   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 20:25:13.331376   61318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 20:25:13.351157   61318 ssh_runner.go:195] Run: openssl version
	I0919 20:25:13.357672   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 20:25:13.369679   61318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:25:13.374748   61318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:25:13.374810   61318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:25:13.380688   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 20:25:13.389974   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 20:25:13.400588   61318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 20:25:13.405089   61318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 20:25:13.405152   61318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 20:25:13.411128   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 20:25:13.421435   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 20:25:13.434234   61318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 20:25:13.438796   61318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 20:25:13.438863   61318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 20:25:13.445690   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 20:25:13.455847   61318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 20:25:13.460864   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 20:25:13.466459   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 20:25:13.476218   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 20:25:13.495704   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 20:25:13.547017   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 20:25:13.580141   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 20:25:13.601459   61318 kubeadm.go:392] StartCluster: {Name:pause-670672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:25:13.601570   61318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 20:25:13.601619   61318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 20:25:13.810200   61318 cri.go:89] found id: "b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d"
	I0919 20:25:13.810229   61318 cri.go:89] found id: "4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3"
	I0919 20:25:13.810235   61318 cri.go:89] found id: "01c6b4268b2db0b252c7cecedb98a4fca95853acf13f51176e93d558eaeddc90"
	I0919 20:25:13.810241   61318 cri.go:89] found id: "03a58205b86bf31a7741f4dfc3a5772e5ea2d288faa422997f3dd29f41c6881e"
	I0919 20:25:13.810245   61318 cri.go:89] found id: "a114fcf124488bd89136755c5b47fa6c412ac3d6cdbc4ab73481696b685c248b"
	I0919 20:25:13.810250   61318 cri.go:89] found id: "3d8f8663344aef94598951fa7b68c35231b00bfae6325f6ebc3d9e38848e2c1e"
	I0919 20:25:13.810253   61318 cri.go:89] found id: ""
	I0919 20:25:13.810308   61318 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.342358986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777544342325467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e2d2892-14a0-4e34-8947-7d2b6586e848 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.343067571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe7d4ce0-eede-4cba-baa0-35c84659f1a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.343328203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe7d4ce0-eede-4cba-baa0-35c84659f1a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.343733288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de8a09a54a42c156151203cf80a494b94bef7c73fae0a05bb5688ce9b28ca67c,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726777527204269816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e80bcabc5ca3d9d4349b250687457360e4de3f5dd89703acb3893b93321e09f,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726777527174258507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e843ffd50bd660fb7e78baa93b67b7746b8af6d27b94c18b6496f0e90b9155f,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726777527193913824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d97dcc85cdba7cab90e81240c278a75d7fc25d02b77a2b074daf3d47a45621,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726777527184981401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380634a5fb6fe0144cdd4083f6dd943bf7959be1b5b600b745ddf354e0ef297b,PodSandboxId:9266d27c0ea5274dda2248e6e4a217a8a1ce2ed38fcc186bc2a015898d880329,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726777514754281739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52410b8c9de7dbf380fcd01b1db2956cdadda87967c76b041ae0b4e706b42650,PodSandboxId:0af544a08520eaf859764a25488be6287e7f26ccab73dcc333b4e2adf216db6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726777514053779153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726777513973523845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726777514005950750,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726777513895313077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726777513844638755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d,PodSandboxId:ddd45b63ed24668d280fde66288d11234dab7e841444d8c7d4b9b2d2dbfc653f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726777469349246386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3,PodSandboxId:7d0fe7542083018acecb845e90e843f11df6dc04bc0f7b3272515faebbb52edb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726777468921716594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe7d4ce0-eede-4cba-baa0-35c84659f1a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.408797234Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=997e13d7-010e-48c6-8565-106355c59dfa name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.408925741Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=997e13d7-010e-48c6-8565-106355c59dfa name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.410605351Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eaee8eb8-8228-4360-89fa-1e7b56d26fab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.411129875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777544411101177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eaee8eb8-8228-4360-89fa-1e7b56d26fab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.412037121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ff21d52-b185-4fa9-b253-504bec5f5659 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.412454448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ff21d52-b185-4fa9-b253-504bec5f5659 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.415770227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de8a09a54a42c156151203cf80a494b94bef7c73fae0a05bb5688ce9b28ca67c,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726777527204269816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e80bcabc5ca3d9d4349b250687457360e4de3f5dd89703acb3893b93321e09f,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726777527174258507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e843ffd50bd660fb7e78baa93b67b7746b8af6d27b94c18b6496f0e90b9155f,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726777527193913824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d97dcc85cdba7cab90e81240c278a75d7fc25d02b77a2b074daf3d47a45621,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726777527184981401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380634a5fb6fe0144cdd4083f6dd943bf7959be1b5b600b745ddf354e0ef297b,PodSandboxId:9266d27c0ea5274dda2248e6e4a217a8a1ce2ed38fcc186bc2a015898d880329,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726777514754281739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52410b8c9de7dbf380fcd01b1db2956cdadda87967c76b041ae0b4e706b42650,PodSandboxId:0af544a08520eaf859764a25488be6287e7f26ccab73dcc333b4e2adf216db6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726777514053779153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726777513973523845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726777514005950750,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726777513895313077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726777513844638755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d,PodSandboxId:ddd45b63ed24668d280fde66288d11234dab7e841444d8c7d4b9b2d2dbfc653f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726777469349246386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3,PodSandboxId:7d0fe7542083018acecb845e90e843f11df6dc04bc0f7b3272515faebbb52edb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726777468921716594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ff21d52-b185-4fa9-b253-504bec5f5659 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.479343416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe67cc1e-3ea0-4a0f-b8bd-b1d739e3aa25 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.479446325Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe67cc1e-3ea0-4a0f-b8bd-b1d739e3aa25 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.481336729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=84ac3468-39b8-4b70-bf92-18cfd257c742 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.481869028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777544481832430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84ac3468-39b8-4b70-bf92-18cfd257c742 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.482566418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4a9b230-d832-486c-b5d6-4f93daa7f8df name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.482667541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4a9b230-d832-486c-b5d6-4f93daa7f8df name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.483174094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de8a09a54a42c156151203cf80a494b94bef7c73fae0a05bb5688ce9b28ca67c,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726777527204269816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e80bcabc5ca3d9d4349b250687457360e4de3f5dd89703acb3893b93321e09f,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726777527174258507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e843ffd50bd660fb7e78baa93b67b7746b8af6d27b94c18b6496f0e90b9155f,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726777527193913824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d97dcc85cdba7cab90e81240c278a75d7fc25d02b77a2b074daf3d47a45621,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726777527184981401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380634a5fb6fe0144cdd4083f6dd943bf7959be1b5b600b745ddf354e0ef297b,PodSandboxId:9266d27c0ea5274dda2248e6e4a217a8a1ce2ed38fcc186bc2a015898d880329,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726777514754281739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52410b8c9de7dbf380fcd01b1db2956cdadda87967c76b041ae0b4e706b42650,PodSandboxId:0af544a08520eaf859764a25488be6287e7f26ccab73dcc333b4e2adf216db6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726777514053779153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726777513973523845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726777514005950750,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726777513895313077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726777513844638755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d,PodSandboxId:ddd45b63ed24668d280fde66288d11234dab7e841444d8c7d4b9b2d2dbfc653f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726777469349246386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3,PodSandboxId:7d0fe7542083018acecb845e90e843f11df6dc04bc0f7b3272515faebbb52edb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726777468921716594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4a9b230-d832-486c-b5d6-4f93daa7f8df name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.543547624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8adaeb6-db22-4a99-9347-c54c52b9e5bf name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.543666853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8adaeb6-db22-4a99-9347-c54c52b9e5bf name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.545775890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7627c246-8b3a-4ffc-a0d9-48a3bd1d1457 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.546362563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777544546330302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7627c246-8b3a-4ffc-a0d9-48a3bd1d1457 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.547028212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f64f603-73f8-4bc2-be1e-53034160f836 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.547124245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f64f603-73f8-4bc2-be1e-53034160f836 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:44 pause-670672 crio[2320]: time="2024-09-19 20:25:44.547534184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de8a09a54a42c156151203cf80a494b94bef7c73fae0a05bb5688ce9b28ca67c,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726777527204269816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e80bcabc5ca3d9d4349b250687457360e4de3f5dd89703acb3893b93321e09f,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726777527174258507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e843ffd50bd660fb7e78baa93b67b7746b8af6d27b94c18b6496f0e90b9155f,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726777527193913824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d97dcc85cdba7cab90e81240c278a75d7fc25d02b77a2b074daf3d47a45621,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726777527184981401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380634a5fb6fe0144cdd4083f6dd943bf7959be1b5b600b745ddf354e0ef297b,PodSandboxId:9266d27c0ea5274dda2248e6e4a217a8a1ce2ed38fcc186bc2a015898d880329,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726777514754281739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52410b8c9de7dbf380fcd01b1db2956cdadda87967c76b041ae0b4e706b42650,PodSandboxId:0af544a08520eaf859764a25488be6287e7f26ccab73dcc333b4e2adf216db6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726777514053779153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726777513973523845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726777514005950750,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726777513895313077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726777513844638755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d,PodSandboxId:ddd45b63ed24668d280fde66288d11234dab7e841444d8c7d4b9b2d2dbfc653f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726777469349246386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3,PodSandboxId:7d0fe7542083018acecb845e90e843f11df6dc04bc0f7b3272515faebbb52edb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726777468921716594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f64f603-73f8-4bc2-be1e-53034160f836 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	de8a09a54a42c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   17 seconds ago       Running             kube-scheduler            2                   4ac4459f40399       kube-scheduler-pause-670672
	6e843ffd50bd6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   17 seconds ago       Running             kube-apiserver            2                   3e1c527f24a39       kube-apiserver-pause-670672
	a6d97dcc85cdb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 seconds ago       Running             kube-controller-manager   2                   5d9cbfbf07e49       kube-controller-manager-pause-670672
	9e80bcabc5ca3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 seconds ago       Running             etcd                      2                   e579f01095766       etcd-pause-670672
	380634a5fb6fe       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   29 seconds ago       Running             coredns                   1                   9266d27c0ea52       coredns-7c65d6cfc9-jmxnk
	52410b8c9de7d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   30 seconds ago       Running             kube-proxy                1                   0af544a08520e       kube-proxy-jb8pb
	aa0824d46c53f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   30 seconds ago       Exited              kube-controller-manager   1                   5d9cbfbf07e49       kube-controller-manager-pause-670672
	ca11197e6bec7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   30 seconds ago       Exited              kube-apiserver            1                   3e1c527f24a39       kube-apiserver-pause-670672
	ecc20d16db725       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   30 seconds ago       Exited              etcd                      1                   e579f01095766       etcd-pause-670672
	22f2310aed4d6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   30 seconds ago       Exited              kube-scheduler            1                   4ac4459f40399       kube-scheduler-pause-670672
	b1553793a79d7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   ddd45b63ed246       coredns-7c65d6cfc9-jmxnk
	4cae729f51a0f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                0                   7d0fe75420830       kube-proxy-jb8pb
	
	
	==> coredns [380634a5fb6fe0144cdd4083f6dd943bf7959be1b5b600b745ddf354e0ef297b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35903 - 54935 "HINFO IN 3485892769307809059.5083327303141623088. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.209846883s
	
	
	==> coredns [b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d] <==
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[203332907]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 20:24:29.577) (total time: 30003ms):
	Trace[203332907]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:24:59.578)
	Trace[203332907]: [30.003329823s] [30.003329823s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1353835626]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 20:24:29.580) (total time: 30002ms):
	Trace[1353835626]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:24:59.582)
	Trace[1353835626]: [30.002148747s] [30.002148747s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2003631328]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 20:24:29.578) (total time: 30006ms):
	Trace[2003631328]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30006ms (20:24:59.584)
	Trace[2003631328]: [30.00621084s] [30.00621084s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57352 - 42934 "HINFO IN 8696267387429721357.7079103495970484133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015361275s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-670672
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-670672
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=pause-670672
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T20_24_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 20:24:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-670672
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 20:25:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 20:25:31 +0000   Thu, 19 Sep 2024 20:24:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 20:25:31 +0000   Thu, 19 Sep 2024 20:24:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 20:25:31 +0000   Thu, 19 Sep 2024 20:24:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 20:25:31 +0000   Thu, 19 Sep 2024 20:24:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    pause-670672
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 26db837093c84f2ca63323ffe31863d7
	  System UUID:                26db8370-93c8-4f2c-a633-23ffe31863d7
	  Boot ID:                    44145e9b-ac01-4a7e-a8ad-2aad874c57bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jmxnk                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 etcd-pause-670672                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         81s
	  kube-system                 kube-apiserver-pause-670672             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-670672    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-jb8pb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-670672             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientPID     87s (x7 over 87s)  kubelet          Node pause-670672 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)  kubelet          Node pause-670672 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)  kubelet          Node pause-670672 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node pause-670672 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node pause-670672 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s                kubelet          Node pause-670672 status is now: NodeHasSufficientPID
	  Normal  NodeReady                80s                kubelet          Node pause-670672 status is now: NodeReady
	  Normal  RegisteredNode           77s                node-controller  Node pause-670672 event: Registered Node pause-670672 in Controller
	  Normal  Starting                 18s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node pause-670672 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node pause-670672 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)  kubelet          Node pause-670672 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-670672 event: Registered Node pause-670672 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.230249] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.065592] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057565] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.174666] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.160294] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.294651] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +4.090307] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +5.433575] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.058227] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.002611] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.081212] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.316546] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[  +0.106926] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.709554] kauditd_printk_skb: 99 callbacks suppressed
	[Sep19 20:25] systemd-fstab-generator[2246]: Ignoring "noauto" option for root device
	[  +0.138160] systemd-fstab-generator[2258]: Ignoring "noauto" option for root device
	[  +0.181402] systemd-fstab-generator[2272]: Ignoring "noauto" option for root device
	[  +0.153744] systemd-fstab-generator[2284]: Ignoring "noauto" option for root device
	[  +0.318893] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	[  +0.717615] systemd-fstab-generator[2432]: Ignoring "noauto" option for root device
	[  +4.301239] kauditd_printk_skb: 196 callbacks suppressed
	[  +9.263094] systemd-fstab-generator[3249]: Ignoring "noauto" option for root device
	[  +8.401310] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.386313] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	
	
	==> etcd [9e80bcabc5ca3d9d4349b250687457360e4de3f5dd89703acb3893b93321e09f] <==
	{"level":"warn","ts":"2024-09-19T20:25:36.895043Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180177Z","time spent":"714.833576ms","remote":"127.0.0.1:39288","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:457 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2024-09-19T20:25:36.895135Z","caller":"traceutil/trace.go:171","msg":"trace[682819107] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"714.665673ms","start":"2024-09-19T20:25:36.180461Z","end":"2024-09-19T20:25:36.895127Z","steps":["trace[682819107] 'process raft request'  (duration: 714.392157ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:25:36.895158Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180417Z","time spent":"714.730153ms","remote":"127.0.0.1:39604","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:455 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2024-09-19T20:25:36.895281Z","caller":"traceutil/trace.go:171","msg":"trace[1102741383] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"627.760741ms","start":"2024-09-19T20:25:36.267508Z","end":"2024-09-19T20:25:36.895269Z","steps":["trace[1102741383] 'process raft request'  (duration: 627.404835ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T20:25:36.895314Z","caller":"traceutil/trace.go:171","msg":"trace[1753253500] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:498; }","duration":"714.985149ms","start":"2024-09-19T20:25:36.180322Z","end":"2024-09-19T20:25:36.895307Z","steps":["trace[1753253500] 'read index received'  (duration: 82.093953ms)","trace[1753253500] 'applied index is now lower than readState.Index'  (duration: 632.890646ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T20:25:36.895347Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.267490Z","time spent":"627.819506ms","remote":"127.0.0.1:39418","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-6kxlp\" mod_revision:460 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-6kxlp\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-6kxlp\" > >"}
	{"level":"warn","ts":"2024-09-19T20:25:36.895449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"715.119349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2024-09-19T20:25:36.895468Z","caller":"traceutil/trace.go:171","msg":"trace[63308975] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:463; }","duration":"715.144188ms","start":"2024-09-19T20:25:36.180318Z","end":"2024-09-19T20:25:36.895462Z","steps":["trace[63308975] 'agreement among raft nodes before linearized reading'  (duration: 715.054259ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:25:36.895486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180286Z","time spent":"715.19366ms","remote":"127.0.0.1:39226","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":1,"response size":392,"request content":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" "}
	{"level":"warn","ts":"2024-09-19T20:25:36.895589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"715.252118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-09-19T20:25:36.895602Z","caller":"traceutil/trace.go:171","msg":"trace[1299525818] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:463; }","duration":"715.265501ms","start":"2024-09-19T20:25:36.180333Z","end":"2024-09-19T20:25:36.895598Z","steps":["trace[1299525818] 'agreement among raft nodes before linearized reading'  (duration: 715.231118ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:25:36.895615Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180317Z","time spent":"715.294836ms","remote":"127.0.0.1:39332","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":229,"request content":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" "}
	{"level":"warn","ts":"2024-09-19T20:25:36.895690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"715.254624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-670672\" ","response":"range_response_count:1 size:5851"}
	{"level":"info","ts":"2024-09-19T20:25:36.895702Z","caller":"traceutil/trace.go:171","msg":"trace[1426666188] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-670672; range_end:; response_count:1; response_revision:463; }","duration":"715.270337ms","start":"2024-09-19T20:25:36.180428Z","end":"2024-09-19T20:25:36.895698Z","steps":["trace[1426666188] 'agreement among raft nodes before linearized reading'  (duration: 715.241556ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:25:36.895713Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180419Z","time spent":"715.291396ms","remote":"127.0.0.1:39312","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5873,"request content":"key:\"/registry/pods/kube-system/etcd-pause-670672\" "}
	{"level":"warn","ts":"2024-09-19T20:25:36.895741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"715.328513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-jmxnk\" ","response":"range_response_count:1 size:5149"}
	{"level":"info","ts":"2024-09-19T20:25:36.895792Z","caller":"traceutil/trace.go:171","msg":"trace[638520599] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-jmxnk; range_end:; response_count:1; response_revision:463; }","duration":"715.377578ms","start":"2024-09-19T20:25:36.180407Z","end":"2024-09-19T20:25:36.895784Z","steps":["trace[638520599] 'agreement among raft nodes before linearized reading'  (duration: 715.300738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:25:36.895825Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180393Z","time spent":"715.418379ms","remote":"127.0.0.1:39312","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":5171,"request content":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-jmxnk\" "}
	{"level":"info","ts":"2024-09-19T20:25:37.191252Z","caller":"traceutil/trace.go:171","msg":"trace[1162924208] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:501; }","duration":"278.704688ms","start":"2024-09-19T20:25:36.912470Z","end":"2024-09-19T20:25:37.191175Z","steps":["trace[1162924208] 'read index received'  (duration: 271.665399ms)","trace[1162924208] 'applied index is now lower than readState.Index'  (duration: 7.038758ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T20:25:37.191390Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.896388ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-670672\" ","response":"range_response_count:1 size:5851"}
	{"level":"info","ts":"2024-09-19T20:25:37.191433Z","caller":"traceutil/trace.go:171","msg":"trace[1909379005] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-670672; range_end:; response_count:1; response_revision:464; }","duration":"278.958706ms","start":"2024-09-19T20:25:36.912467Z","end":"2024-09-19T20:25:37.191426Z","steps":["trace[1909379005] 'agreement among raft nodes before linearized reading'  (duration: 278.826623ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T20:25:37.191563Z","caller":"traceutil/trace.go:171","msg":"trace[2038750672] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"281.908491ms","start":"2024-09-19T20:25:36.909642Z","end":"2024-09-19T20:25:37.191551Z","steps":["trace[2038750672] 'process raft request'  (duration: 274.477777ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T20:25:37.192241Z","caller":"traceutil/trace.go:171","msg":"trace[1305136966] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"278.743593ms","start":"2024-09-19T20:25:36.913434Z","end":"2024-09-19T20:25:37.192178Z","steps":["trace[1305136966] 'process raft request'  (duration: 278.618176ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T20:25:37.192373Z","caller":"traceutil/trace.go:171","msg":"trace[1673847013] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"278.351969ms","start":"2024-09-19T20:25:36.914014Z","end":"2024-09-19T20:25:37.192366Z","steps":["trace[1673847013] 'process raft request'  (duration: 278.113834ms)"],"step_count":1}
	
	
	==> etcd [ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a] <==
	{"level":"info","ts":"2024-09-19T20:25:15.459920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-19T20:25:15.459978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e received MsgPreVoteResp from 32f03a72bea6354e at term 2"}
	{"level":"info","ts":"2024-09-19T20:25:15.460016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e became candidate at term 3"}
	{"level":"info","ts":"2024-09-19T20:25:15.460041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e received MsgVoteResp from 32f03a72bea6354e at term 3"}
	{"level":"info","ts":"2024-09-19T20:25:15.460068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e became leader at term 3"}
	{"level":"info","ts":"2024-09-19T20:25:15.460093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 32f03a72bea6354e elected leader 32f03a72bea6354e at term 3"}
	{"level":"info","ts":"2024-09-19T20:25:15.462028Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T20:25:15.462263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T20:25:15.462610Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T20:25:15.462649Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T20:25:15.462066Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"32f03a72bea6354e","local-member-attributes":"{Name:pause-670672 ClientURLs:[https://192.168.39.136:2379]}","request-path":"/0/members/32f03a72bea6354e/attributes","cluster-id":"6fc8639e731f3dca","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T20:25:15.463482Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T20:25:15.463702Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T20:25:15.464562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.136:2379"}
	{"level":"info","ts":"2024-09-19T20:25:15.464803Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T20:25:24.718660Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-19T20:25:24.718720Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-670672","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.136:2380"],"advertise-client-urls":["https://192.168.39.136:2379"]}
	{"level":"warn","ts":"2024-09-19T20:25:24.718829Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:25:24.718859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:25:24.720472Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.136:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:25:24.720512Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.136:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-19T20:25:24.721875Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"32f03a72bea6354e","current-leader-member-id":"32f03a72bea6354e"}
	{"level":"info","ts":"2024-09-19T20:25:24.725662Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.136:2380"}
	{"level":"info","ts":"2024-09-19T20:25:24.725742Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.136:2380"}
	{"level":"info","ts":"2024-09-19T20:25:24.725753Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-670672","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.136:2380"],"advertise-client-urls":["https://192.168.39.136:2379"]}
	
	
	==> kernel <==
	 20:25:45 up 1 min,  0 users,  load average: 0.76, 0.34, 0.12
	Linux pause-670672 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6e843ffd50bd660fb7e78baa93b67b7746b8af6d27b94c18b6496f0e90b9155f] <==
	I0919 20:25:31.192439       1 aggregator.go:171] initial CRD sync complete...
	I0919 20:25:31.192483       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 20:25:31.192493       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 20:25:31.192500       1 cache.go:39] Caches are synced for autoregister controller
	I0919 20:25:31.194367       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 20:25:31.194400       1 policy_source.go:224] refreshing policies
	I0919 20:25:31.201054       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 20:25:31.201529       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0919 20:25:31.201580       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0919 20:25:31.206387       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0919 20:25:31.217032       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 20:25:31.247496       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 20:25:31.247181       1 shared_informer.go:320] Caches are synced for configmaps
	I0919 20:25:31.248528       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0919 20:25:31.271442       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0919 20:25:31.277405       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0919 20:25:32.051481       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 20:25:32.286474       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.136]
	I0919 20:25:32.288031       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 20:25:32.298083       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 20:25:32.468618       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0919 20:25:32.482053       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0919 20:25:32.518520       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 20:25:32.554243       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 20:25:32.561090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7] <==
	I0919 20:25:17.091952       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0919 20:25:17.092032       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 20:25:17.092391       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 20:25:17.091376       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 20:25:17.095404       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0919 20:25:17.100134       1 controller.go:157] Shutting down quota evaluator
	I0919 20:25:17.100488       1 controller.go:176] quota evaluator worker shutdown
	I0919 20:25:17.100593       1 controller.go:176] quota evaluator worker shutdown
	I0919 20:25:17.100692       1 controller.go:176] quota evaluator worker shutdown
	I0919 20:25:17.100717       1 controller.go:176] quota evaluator worker shutdown
	I0919 20:25:17.100739       1 controller.go:176] quota evaluator worker shutdown
	W0919 20:25:17.821751       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:17.822668       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:18.821016       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:18.822598       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:19.821050       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:19.822747       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:20.821347       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:20.823368       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:21.821857       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:21.822255       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:22.822114       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:22.822134       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:23.820979       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:23.822762       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-controller-manager [a6d97dcc85cdba7cab90e81240c278a75d7fc25d02b77a2b074daf3d47a45621] <==
	I0919 20:25:34.493245       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0919 20:25:34.495648       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0919 20:25:34.499132       1 shared_informer.go:320] Caches are synced for ephemeral
	I0919 20:25:34.500379       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0919 20:25:34.501792       1 shared_informer.go:320] Caches are synced for GC
	I0919 20:25:34.507166       1 shared_informer.go:320] Caches are synced for taint
	I0919 20:25:34.507371       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 20:25:34.507620       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-670672"
	I0919 20:25:34.507796       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 20:25:34.513151       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0919 20:25:34.514470       1 shared_informer.go:320] Caches are synced for TTL
	I0919 20:25:34.514592       1 shared_informer.go:320] Caches are synced for attach detach
	I0919 20:25:34.636673       1 shared_informer.go:320] Caches are synced for deployment
	I0919 20:25:34.663614       1 shared_informer.go:320] Caches are synced for disruption
	I0919 20:25:34.673462       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 20:25:34.677015       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 20:25:34.714056       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0919 20:25:34.714316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="165.835µs"
	I0919 20:25:35.108532       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 20:25:35.131164       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 20:25:35.131254       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 20:25:37.195521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.025699591s"
	I0919 20:25:37.195627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="54.473µs"
	I0919 20:25:37.229510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.928424ms"
	I0919 20:25:37.229773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.811µs"
	
	
	==> kube-controller-manager [aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9] <==
	
	
	==> kube-proxy [4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 20:24:29.619313       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 20:24:29.636506       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.136"]
	E0919 20:24:29.636877       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 20:24:29.688522       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 20:24:29.688577       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 20:24:29.688612       1 server_linux.go:169] "Using iptables Proxier"
	I0919 20:24:29.693718       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 20:24:29.694628       1 server.go:483] "Version info" version="v1.31.1"
	I0919 20:24:29.694684       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:24:29.698245       1 config.go:199] "Starting service config controller"
	I0919 20:24:29.698444       1 config.go:105] "Starting endpoint slice config controller"
	I0919 20:24:29.698734       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 20:24:29.698814       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 20:24:29.699149       1 config.go:328] "Starting node config controller"
	I0919 20:24:29.699317       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 20:24:29.799411       1 shared_informer.go:320] Caches are synced for service config
	I0919 20:24:29.799507       1 shared_informer.go:320] Caches are synced for node config
	I0919 20:24:29.800681       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [52410b8c9de7dbf380fcd01b1db2956cdadda87967c76b041ae0b4e706b42650] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 20:25:15.180664       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 20:25:16.988328       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.136"]
	E0919 20:25:16.988494       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 20:25:17.039667       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 20:25:17.039724       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 20:25:17.039753       1 server_linux.go:169] "Using iptables Proxier"
	I0919 20:25:17.042467       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 20:25:17.043162       1 server.go:483] "Version info" version="v1.31.1"
	I0919 20:25:17.043242       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:25:17.044639       1 config.go:199] "Starting service config controller"
	I0919 20:25:17.044685       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 20:25:17.044718       1 config.go:105] "Starting endpoint slice config controller"
	I0919 20:25:17.044738       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 20:25:17.046659       1 config.go:328] "Starting node config controller"
	I0919 20:25:17.046693       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 20:25:17.145092       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 20:25:17.145184       1 shared_informer.go:320] Caches are synced for service config
	I0919 20:25:17.146765       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a] <==
	I0919 20:25:15.112233       1 serving.go:386] Generated self-signed cert in-memory
	W0919 20:25:16.921996       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 20:25:16.922100       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 20:25:16.922136       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 20:25:16.922255       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 20:25:16.993558       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0919 20:25:16.993652       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:25:16.996322       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 20:25:16.996406       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:25:16.999663       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 20:25:16.999739       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 20:25:17.097481       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:25:24.861995       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0919 20:25:24.862124       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0919 20:25:24.862831       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [de8a09a54a42c156151203cf80a494b94bef7c73fae0a05bb5688ce9b28ca67c] <==
	I0919 20:25:28.469558       1 serving.go:386] Generated self-signed cert in-memory
	I0919 20:25:31.277068       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0919 20:25:31.277297       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:25:31.286549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 20:25:31.286665       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 20:25:31.286898       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:25:31.286962       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 20:25:31.286989       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 20:25:31.287013       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0919 20:25:31.286589       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0919 20:25:31.287168       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0919 20:25:31.387724       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:25:31.388281       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0919 20:25:31.388617       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Sep 19 20:25:26 pause-670672 kubelet[3256]: I0919 20:25:26.903278    3256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/716a3c3519e4de3363bc1ab1d98f6763-usr-share-ca-certificates\") pod \"kube-apiserver-pause-670672\" (UID: \"716a3c3519e4de3363bc1ab1d98f6763\") " pod="kube-system/kube-apiserver-pause-670672"
	Sep 19 20:25:26 pause-670672 kubelet[3256]: I0919 20:25:26.903294    3256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/179d596e0ace88d24ae2cbcfd254ccf6-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-670672\" (UID: \"179d596e0ace88d24ae2cbcfd254ccf6\") " pod="kube-system/kube-controller-manager-pause-670672"
	Sep 19 20:25:26 pause-670672 kubelet[3256]: I0919 20:25:26.903308    3256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7394ade2d983f4ac0e2571a895778847-etcd-certs\") pod \"etcd-pause-670672\" (UID: \"7394ade2d983f4ac0e2571a895778847\") " pod="kube-system/etcd-pause-670672"
	Sep 19 20:25:26 pause-670672 kubelet[3256]: E0919 20:25:26.904697    3256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-670672?timeout=10s\": dial tcp 192.168.39.136:8443: connect: connection refused" interval="400ms"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.083037    3256 kubelet_node_status.go:72] "Attempting to register node" node="pause-670672"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: E0919 20:25:27.083956    3256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.136:8443: connect: connection refused" node="pause-670672"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.156961    3256 scope.go:117] "RemoveContainer" containerID="ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.157374    3256 scope.go:117] "RemoveContainer" containerID="ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.159362    3256 scope.go:117] "RemoveContainer" containerID="aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.160135    3256 scope.go:117] "RemoveContainer" containerID="22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: E0919 20:25:27.306710    3256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-670672?timeout=10s\": dial tcp 192.168.39.136:8443: connect: connection refused" interval="800ms"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.485866    3256 kubelet_node_status.go:72] "Attempting to register node" node="pause-670672"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: E0919 20:25:27.486686    3256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.136:8443: connect: connection refused" node="pause-670672"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: E0919 20:25:27.504640    3256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.136:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-670672.17f6bf02fc3a1670  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-670672,UID:pause-670672,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:pause-670672,},FirstTimestamp:2024-09-19 20:25:26.685668976 +0000 UTC m=+0.113961314,LastTimestamp:2024-09-19 20:25:26.685668976 +0000 UTC m=+0.113961314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-670672,}"
	Sep 19 20:25:28 pause-670672 kubelet[3256]: I0919 20:25:28.287970    3256 kubelet_node_status.go:72] "Attempting to register node" node="pause-670672"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.316817    3256 kubelet_node_status.go:111] "Node was previously registered" node="pause-670672"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.317500    3256 kubelet_node_status.go:75] "Successfully registered node" node="pause-670672"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.317665    3256 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.319121    3256 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.684566    3256 apiserver.go:52] "Watching apiserver"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.698797    3256 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.743425    3256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24fc727e-56bc-48fc-bb7d-6fd965042da0-xtables-lock\") pod \"kube-proxy-jb8pb\" (UID: \"24fc727e-56bc-48fc-bb7d-6fd965042da0\") " pod="kube-system/kube-proxy-jb8pb"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.743566    3256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24fc727e-56bc-48fc-bb7d-6fd965042da0-lib-modules\") pod \"kube-proxy-jb8pb\" (UID: \"24fc727e-56bc-48fc-bb7d-6fd965042da0\") " pod="kube-system/kube-proxy-jb8pb"
	Sep 19 20:25:36 pause-670672 kubelet[3256]: E0919 20:25:36.784029    3256 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777536783419224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:25:36 pause-670672 kubelet[3256]: E0919 20:25:36.784518    3256 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777536783419224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 20:25:43.987017   61772 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19664-7917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-670672 -n pause-670672
helpers_test.go:261: (dbg) Run:  kubectl --context pause-670672 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-670672 -n pause-670672
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-670672 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-670672 logs -n 25: (1.421152065s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-801740 sudo find         | cilium-801740             | jenkins | v1.34.0 | 19 Sep 24 20:20 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p cilium-801740 sudo crio         | cilium-801740             | jenkins | v1.34.0 | 19 Sep 24 20:20 UTC |                     |
	|         | config                             |                           |         |         |                     |                     |
	| delete  | -p cilium-801740                   | cilium-801740             | jenkins | v1.34.0 | 19 Sep 24 20:20 UTC | 19 Sep 24 20:20 UTC |
	| start   | -p cert-expiration-478436          | cert-expiration-478436    | jenkins | v1.34.0 | 19 Sep 24 20:20 UTC | 19 Sep 24 20:22 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:21 UTC | 19 Sep 24 20:22 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-011213             | offline-crio-011213       | jenkins | v1.34.0 | 19 Sep 24 20:21 UTC | 19 Sep 24 20:21 UTC |
	| start   | -p force-systemd-flag-013710       | force-systemd-flag-013710 | jenkins | v1.34.0 | 19 Sep 24 20:21 UTC | 19 Sep 24 20:22 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:22 UTC |
	| start   | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:22 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-070299          | running-upgrade-070299    | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:23 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-013710 ssh cat  | force-systemd-flag-013710 | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:22 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-013710       | force-systemd-flag-013710 | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:22 UTC |
	| start   | -p kubernetes-upgrade-342125       | kubernetes-upgrade-342125 | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-045748 sudo        | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:22 UTC |
	| start   | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:22 UTC | 19 Sep 24 20:23 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-045748 sudo        | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:23 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-045748             | NoKubernetes-045748       | jenkins | v1.34.0 | 19 Sep 24 20:23 UTC | 19 Sep 24 20:23 UTC |
	| start   | -p pause-670672 --memory=2048      | pause-670672              | jenkins | v1.34.0 | 19 Sep 24 20:23 UTC | 19 Sep 24 20:25 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-070299          | running-upgrade-070299    | jenkins | v1.34.0 | 19 Sep 24 20:23 UTC | 19 Sep 24 20:23 UTC |
	| start   | -p stopped-upgrade-927381          | minikube                  | jenkins | v1.26.0 | 19 Sep 24 20:23 UTC | 19 Sep 24 20:25 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p pause-670672                    | pause-670672              | jenkins | v1.34.0 | 19 Sep 24 20:25 UTC | 19 Sep 24 20:25 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-927381 stop        | minikube                  | jenkins | v1.26.0 | 19 Sep 24 20:25 UTC | 19 Sep 24 20:25 UTC |
	| start   | -p stopped-upgrade-927381          | stopped-upgrade-927381    | jenkins | v1.34.0 | 19 Sep 24 20:25 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-478436          | cert-expiration-478436    | jenkins | v1.34.0 | 19 Sep 24 20:25 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h            |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 20:25:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 20:25:14.361224   61525 out.go:345] Setting OutFile to fd 1 ...
	I0919 20:25:14.361358   61525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:25:14.361363   61525 out.go:358] Setting ErrFile to fd 2...
	I0919 20:25:14.361368   61525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:25:14.361654   61525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 20:25:14.362381   61525 out.go:352] Setting JSON to false
	I0919 20:25:14.363639   61525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7658,"bootTime":1726769856,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 20:25:14.363747   61525 start.go:139] virtualization: kvm guest
	I0919 20:25:14.366140   61525 out.go:177] * [cert-expiration-478436] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 20:25:14.367936   61525 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 20:25:14.367956   61525 notify.go:220] Checking for updates...
	I0919 20:25:14.371084   61525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 20:25:14.372612   61525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 20:25:14.374004   61525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 20:25:14.375363   61525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 20:25:14.376563   61525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 20:25:12.614568   61318 main.go:141] libmachine: (pause-670672) Calling .GetIP
	I0919 20:25:12.617799   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:12.618184   61318 main.go:141] libmachine: (pause-670672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:36:12", ip: ""} in network mk-pause-670672: {Iface:virbr1 ExpiryTime:2024-09-19 21:23:59 +0000 UTC Type:0 Mac:52:54:00:ec:36:12 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:pause-670672 Clientid:01:52:54:00:ec:36:12}
	I0919 20:25:12.618208   61318 main.go:141] libmachine: (pause-670672) DBG | domain pause-670672 has defined IP address 192.168.39.136 and MAC address 52:54:00:ec:36:12 in network mk-pause-670672
	I0919 20:25:12.618418   61318 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 20:25:12.623293   61318 kubeadm.go:883] updating cluster {Name:pause-670672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 20:25:12.623415   61318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 20:25:12.623456   61318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:25:12.677493   61318 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 20:25:12.677518   61318 crio.go:433] Images already preloaded, skipping extraction
	I0919 20:25:12.677566   61318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 20:25:12.715420   61318 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 20:25:12.715444   61318 cache_images.go:84] Images are preloaded, skipping loading
	I0919 20:25:12.715454   61318 kubeadm.go:934] updating node { 192.168.39.136 8443 v1.31.1 crio true true} ...
	I0919 20:25:12.715565   61318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-670672 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 20:25:12.715628   61318 ssh_runner.go:195] Run: crio config
	I0919 20:25:12.766799   61318 cni.go:84] Creating CNI manager for ""
	I0919 20:25:12.766818   61318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 20:25:12.766827   61318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 20:25:12.766848   61318 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-670672 NodeName:pause-670672 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 20:25:12.766976   61318 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-670672"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 20:25:12.767031   61318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 20:25:12.777436   61318 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 20:25:12.777512   61318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 20:25:12.787300   61318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 20:25:12.808085   61318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 20:25:12.830467   61318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0919 20:25:12.853427   61318 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I0919 20:25:12.857604   61318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 20:25:13.004416   61318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 20:25:13.020112   61318 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672 for IP: 192.168.39.136
	I0919 20:25:13.020140   61318 certs.go:194] generating shared ca certs ...
	I0919 20:25:13.020159   61318 certs.go:226] acquiring lock for ca certs: {Name:mk94a3800903b572340719dd59bb8828a2560e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 20:25:13.020353   61318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key
	I0919 20:25:13.020414   61318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key
	I0919 20:25:13.020429   61318 certs.go:256] generating profile certs ...
	I0919 20:25:13.020514   61318 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/client.key
	I0919 20:25:13.020572   61318 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/apiserver.key.cb6f5b4b
	I0919 20:25:13.020605   61318 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/proxy-client.key
	I0919 20:25:13.020724   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem (1338 bytes)
	W0919 20:25:13.020765   61318 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116_empty.pem, impossibly tiny 0 bytes
	I0919 20:25:13.020775   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 20:25:13.020798   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/ca.pem (1078 bytes)
	I0919 20:25:13.020821   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/cert.pem (1123 bytes)
	I0919 20:25:13.020843   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/certs/key.pem (1679 bytes)
	I0919 20:25:13.020880   61318 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem (1708 bytes)
	I0919 20:25:13.021568   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 20:25:13.050492   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 20:25:13.077047   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 20:25:13.107924   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 20:25:13.132521   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 20:25:13.163359   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 20:25:13.189997   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 20:25:13.217658   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/pause-670672/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 20:25:13.243146   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/certs/15116.pem --> /usr/share/ca-certificates/15116.pem (1338 bytes)
	I0919 20:25:13.268846   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/ssl/certs/151162.pem --> /usr/share/ca-certificates/151162.pem (1708 bytes)
	I0919 20:25:13.304781   61318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-7917/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 20:25:13.331376   61318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 20:25:13.351157   61318 ssh_runner.go:195] Run: openssl version
	I0919 20:25:13.357672   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 20:25:13.369679   61318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:25:13.374748   61318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:25:13.374810   61318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 20:25:13.380688   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 20:25:13.389974   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15116.pem && ln -fs /usr/share/ca-certificates/15116.pem /etc/ssl/certs/15116.pem"
	I0919 20:25:13.400588   61318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15116.pem
	I0919 20:25:13.405089   61318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 19:20 /usr/share/ca-certificates/15116.pem
	I0919 20:25:13.405152   61318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15116.pem
	I0919 20:25:13.411128   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15116.pem /etc/ssl/certs/51391683.0"
	I0919 20:25:13.421435   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151162.pem && ln -fs /usr/share/ca-certificates/151162.pem /etc/ssl/certs/151162.pem"
	I0919 20:25:13.434234   61318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151162.pem
	I0919 20:25:13.438796   61318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 19:20 /usr/share/ca-certificates/151162.pem
	I0919 20:25:13.438863   61318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151162.pem
	I0919 20:25:13.445690   61318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151162.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 20:25:13.455847   61318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 20:25:13.460864   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 20:25:13.466459   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 20:25:13.476218   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 20:25:13.495704   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 20:25:13.547017   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 20:25:13.580141   61318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 20:25:13.601459   61318 kubeadm.go:392] StartCluster: {Name:pause-670672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-670672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 20:25:13.601570   61318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 20:25:13.601619   61318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 20:25:13.810200   61318 cri.go:89] found id: "b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d"
	I0919 20:25:13.810229   61318 cri.go:89] found id: "4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3"
	I0919 20:25:13.810235   61318 cri.go:89] found id: "01c6b4268b2db0b252c7cecedb98a4fca95853acf13f51176e93d558eaeddc90"
	I0919 20:25:13.810241   61318 cri.go:89] found id: "03a58205b86bf31a7741f4dfc3a5772e5ea2d288faa422997f3dd29f41c6881e"
	I0919 20:25:13.810245   61318 cri.go:89] found id: "a114fcf124488bd89136755c5b47fa6c412ac3d6cdbc4ab73481696b685c248b"
	I0919 20:25:13.810250   61318 cri.go:89] found id: "3d8f8663344aef94598951fa7b68c35231b00bfae6325f6ebc3d9e38848e2c1e"
	I0919 20:25:13.810253   61318 cri.go:89] found id: ""
	I0919 20:25:13.810308   61318 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.510658238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777546510620060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae7946ad-e460-4779-838d-65225fbb3f30 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.511330894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cf64c3e-3db2-491d-9cf6-83698b273a24 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.511404749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cf64c3e-3db2-491d-9cf6-83698b273a24 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.513501669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de8a09a54a42c156151203cf80a494b94bef7c73fae0a05bb5688ce9b28ca67c,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726777527204269816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e80bcabc5ca3d9d4349b250687457360e4de3f5dd89703acb3893b93321e09f,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726777527174258507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e843ffd50bd660fb7e78baa93b67b7746b8af6d27b94c18b6496f0e90b9155f,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726777527193913824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d97dcc85cdba7cab90e81240c278a75d7fc25d02b77a2b074daf3d47a45621,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726777527184981401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380634a5fb6fe0144cdd4083f6dd943bf7959be1b5b600b745ddf354e0ef297b,PodSandboxId:9266d27c0ea5274dda2248e6e4a217a8a1ce2ed38fcc186bc2a015898d880329,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726777514754281739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52410b8c9de7dbf380fcd01b1db2956cdadda87967c76b041ae0b4e706b42650,PodSandboxId:0af544a08520eaf859764a25488be6287e7f26ccab73dcc333b4e2adf216db6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726777514053779153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726777513973523845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726777514005950750,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726777513895313077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726777513844638755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d,PodSandboxId:ddd45b63ed24668d280fde66288d11234dab7e841444d8c7d4b9b2d2dbfc653f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726777469349246386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3,PodSandboxId:7d0fe7542083018acecb845e90e843f11df6dc04bc0f7b3272515faebbb52edb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726777468921716594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cf64c3e-3db2-491d-9cf6-83698b273a24 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.564801313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39de4990-b566-4869-bb67-4255d62be5a3 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.564897027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39de4990-b566-4869-bb67-4255d62be5a3 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.566035979Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8188611d-f05d-442b-9fd1-49d9a3ed9255 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.566882050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777546566851306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8188611d-f05d-442b-9fd1-49d9a3ed9255 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.567393818Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9cffa49-fe07-4684-af72-217eb75ae171 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.567464044Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9cffa49-fe07-4684-af72-217eb75ae171 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.567798871Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de8a09a54a42c156151203cf80a494b94bef7c73fae0a05bb5688ce9b28ca67c,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726777527204269816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e80bcabc5ca3d9d4349b250687457360e4de3f5dd89703acb3893b93321e09f,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726777527174258507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e843ffd50bd660fb7e78baa93b67b7746b8af6d27b94c18b6496f0e90b9155f,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726777527193913824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d97dcc85cdba7cab90e81240c278a75d7fc25d02b77a2b074daf3d47a45621,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726777527184981401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380634a5fb6fe0144cdd4083f6dd943bf7959be1b5b600b745ddf354e0ef297b,PodSandboxId:9266d27c0ea5274dda2248e6e4a217a8a1ce2ed38fcc186bc2a015898d880329,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726777514754281739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52410b8c9de7dbf380fcd01b1db2956cdadda87967c76b041ae0b4e706b42650,PodSandboxId:0af544a08520eaf859764a25488be6287e7f26ccab73dcc333b4e2adf216db6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726777514053779153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726777513973523845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726777514005950750,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726777513895313077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726777513844638755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d,PodSandboxId:ddd45b63ed24668d280fde66288d11234dab7e841444d8c7d4b9b2d2dbfc653f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726777469349246386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3,PodSandboxId:7d0fe7542083018acecb845e90e843f11df6dc04bc0f7b3272515faebbb52edb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726777468921716594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9cffa49-fe07-4684-af72-217eb75ae171 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.625539184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc0f8b4e-a17e-4224-b430-70f1f29d6cc6 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.625619127Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc0f8b4e-a17e-4224-b430-70f1f29d6cc6 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.627185278Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8199edf5-2c38-492b-b96c-f1586a009532 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.628356871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777546628329039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8199edf5-2c38-492b-b96c-f1586a009532 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.629121462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18d9b037-208a-4562-9b8c-3fe2b8de4c2c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.629173211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18d9b037-208a-4562-9b8c-3fe2b8de4c2c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.629467066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de8a09a54a42c156151203cf80a494b94bef7c73fae0a05bb5688ce9b28ca67c,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726777527204269816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e80bcabc5ca3d9d4349b250687457360e4de3f5dd89703acb3893b93321e09f,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726777527174258507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e843ffd50bd660fb7e78baa93b67b7746b8af6d27b94c18b6496f0e90b9155f,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726777527193913824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d97dcc85cdba7cab90e81240c278a75d7fc25d02b77a2b074daf3d47a45621,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726777527184981401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380634a5fb6fe0144cdd4083f6dd943bf7959be1b5b600b745ddf354e0ef297b,PodSandboxId:9266d27c0ea5274dda2248e6e4a217a8a1ce2ed38fcc186bc2a015898d880329,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726777514754281739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52410b8c9de7dbf380fcd01b1db2956cdadda87967c76b041ae0b4e706b42650,PodSandboxId:0af544a08520eaf859764a25488be6287e7f26ccab73dcc333b4e2adf216db6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726777514053779153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726777513973523845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726777514005950750,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726777513895313077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726777513844638755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d,PodSandboxId:ddd45b63ed24668d280fde66288d11234dab7e841444d8c7d4b9b2d2dbfc653f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726777469349246386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3,PodSandboxId:7d0fe7542083018acecb845e90e843f11df6dc04bc0f7b3272515faebbb52edb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726777468921716594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18d9b037-208a-4562-9b8c-3fe2b8de4c2c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.688902053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61932be1-6a49-4b17-9445-1e0079835631 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.689074374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61932be1-6a49-4b17-9445-1e0079835631 name=/runtime.v1.RuntimeService/Version
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.690807688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b08937eb-091c-48c6-a42d-6e6eeaff05d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.691553782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777546691488188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b08937eb-091c-48c6-a42d-6e6eeaff05d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.692258671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a8d18c8-6e29-47dd-8d8b-035808fa1694 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.692357436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a8d18c8-6e29-47dd-8d8b-035808fa1694 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 20:25:46 pause-670672 crio[2320]: time="2024-09-19 20:25:46.692673161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de8a09a54a42c156151203cf80a494b94bef7c73fae0a05bb5688ce9b28ca67c,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726777527204269816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e80bcabc5ca3d9d4349b250687457360e4de3f5dd89703acb3893b93321e09f,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726777527174258507,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e843ffd50bd660fb7e78baa93b67b7746b8af6d27b94c18b6496f0e90b9155f,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726777527193913824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6d97dcc85cdba7cab90e81240c278a75d7fc25d02b77a2b074daf3d47a45621,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726777527184981401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380634a5fb6fe0144cdd4083f6dd943bf7959be1b5b600b745ddf354e0ef297b,PodSandboxId:9266d27c0ea5274dda2248e6e4a217a8a1ce2ed38fcc186bc2a015898d880329,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726777514754281739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52410b8c9de7dbf380fcd01b1db2956cdadda87967c76b041ae0b4e706b42650,PodSandboxId:0af544a08520eaf859764a25488be6287e7f26ccab73dcc333b4e2adf216db6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726777514053779153,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io
.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7,PodSandboxId:3e1c527f24a39f42e41f38af9342cc7e0e8958a124f5f231f52eeb5f113f9bcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726777513973523845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716a3c3519e4de3363bc1ab1d98f6763,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9,PodSandboxId:5d9cbfbf07e498f4fc09bc2679d1ec5bada351e5ed103d24ef76f82c96749d6c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726777514005950750,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 179d596e0ace88d24ae2cbcfd254ccf6,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a,PodSandboxId:e579f010957666dbef9313708daf6b6a34fc6ed3498f5534485aa9e4a72f618c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726777513895313077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7394ade2d983f4ac0e2571a895778847,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a,PodSandboxId:4ac4459f40399b030bcfc3510f97e151a4a46dd4fab454de67bb61708a514a62,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726777513844638755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670672,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ac3bf65cdda897b51fc9d549a6c2ae3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d,PodSandboxId:ddd45b63ed24668d280fde66288d11234dab7e841444d8c7d4b9b2d2dbfc653f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726777469349246386,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jmxnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df399b3f-dbdd-4a65-a9e9-1fdcc76ea2d2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3,PodSandboxId:7d0fe7542083018acecb845e90e843f11df6dc04bc0f7b3272515faebbb52edb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726777468921716594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jb8pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 24fc727e-56bc-48fc-bb7d-6fd965042da0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a8d18c8-6e29-47dd-8d8b-035808fa1694 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	de8a09a54a42c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   19 seconds ago       Running             kube-scheduler            2                   4ac4459f40399       kube-scheduler-pause-670672
	6e843ffd50bd6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 seconds ago       Running             kube-apiserver            2                   3e1c527f24a39       kube-apiserver-pause-670672
	a6d97dcc85cdb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   19 seconds ago       Running             kube-controller-manager   2                   5d9cbfbf07e49       kube-controller-manager-pause-670672
	9e80bcabc5ca3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   19 seconds ago       Running             etcd                      2                   e579f01095766       etcd-pause-670672
	380634a5fb6fe       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   32 seconds ago       Running             coredns                   1                   9266d27c0ea52       coredns-7c65d6cfc9-jmxnk
	52410b8c9de7d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   32 seconds ago       Running             kube-proxy                1                   0af544a08520e       kube-proxy-jb8pb
	aa0824d46c53f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   32 seconds ago       Exited              kube-controller-manager   1                   5d9cbfbf07e49       kube-controller-manager-pause-670672
	ca11197e6bec7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   32 seconds ago       Exited              kube-apiserver            1                   3e1c527f24a39       kube-apiserver-pause-670672
	ecc20d16db725       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   32 seconds ago       Exited              etcd                      1                   e579f01095766       etcd-pause-670672
	22f2310aed4d6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   32 seconds ago       Exited              kube-scheduler            1                   4ac4459f40399       kube-scheduler-pause-670672
	b1553793a79d7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   ddd45b63ed246       coredns-7c65d6cfc9-jmxnk
	4cae729f51a0f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                0                   7d0fe75420830       kube-proxy-jb8pb
	
	
	==> coredns [380634a5fb6fe0144cdd4083f6dd943bf7959be1b5b600b745ddf354e0ef297b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35903 - 54935 "HINFO IN 3485892769307809059.5083327303141623088. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.209846883s
	
	
	==> coredns [b1553793a79d76ab21d79f75ee8d222ead4e123e7d38a63d7bb13807eb657d0d] <==
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[203332907]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 20:24:29.577) (total time: 30003ms):
	Trace[203332907]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:24:59.578)
	Trace[203332907]: [30.003329823s] [30.003329823s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1353835626]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 20:24:29.580) (total time: 30002ms):
	Trace[1353835626]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:24:59.582)
	Trace[1353835626]: [30.002148747s] [30.002148747s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2003631328]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 20:24:29.578) (total time: 30006ms):
	Trace[2003631328]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30006ms (20:24:59.584)
	Trace[2003631328]: [30.00621084s] [30.00621084s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57352 - 42934 "HINFO IN 8696267387429721357.7079103495970484133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015361275s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-670672
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-670672
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=pause-670672
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T20_24_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 20:24:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-670672
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 20:25:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 20:25:31 +0000   Thu, 19 Sep 2024 20:24:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 20:25:31 +0000   Thu, 19 Sep 2024 20:24:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 20:25:31 +0000   Thu, 19 Sep 2024 20:24:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 20:25:31 +0000   Thu, 19 Sep 2024 20:24:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    pause-670672
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 26db837093c84f2ca63323ffe31863d7
	  System UUID:                26db8370-93c8-4f2c-a633-23ffe31863d7
	  Boot ID:                    44145e9b-ac01-4a7e-a8ad-2aad874c57bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jmxnk                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     79s
	  kube-system                 etcd-pause-670672                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         84s
	  kube-system                 kube-apiserver-pause-670672             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-670672    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-jb8pb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-670672             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 77s                kube-proxy       
	  Normal  Starting                 29s                kube-proxy       
	  Normal  NodeHasSufficientPID     90s (x7 over 90s)  kubelet          Node pause-670672 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node pause-670672 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node pause-670672 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  84s                kubelet          Node pause-670672 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s                kubelet          Node pause-670672 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s                kubelet          Node pause-670672 status is now: NodeHasSufficientPID
	  Normal  NodeReady                83s                kubelet          Node pause-670672 status is now: NodeReady
	  Normal  RegisteredNode           80s                node-controller  Node pause-670672 event: Registered Node pause-670672 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node pause-670672 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node pause-670672 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node pause-670672 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                node-controller  Node pause-670672 event: Registered Node pause-670672 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.230249] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.065592] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057565] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.174666] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.160294] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.294651] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +4.090307] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +5.433575] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.058227] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.002611] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.081212] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.316546] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[  +0.106926] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.709554] kauditd_printk_skb: 99 callbacks suppressed
	[Sep19 20:25] systemd-fstab-generator[2246]: Ignoring "noauto" option for root device
	[  +0.138160] systemd-fstab-generator[2258]: Ignoring "noauto" option for root device
	[  +0.181402] systemd-fstab-generator[2272]: Ignoring "noauto" option for root device
	[  +0.153744] systemd-fstab-generator[2284]: Ignoring "noauto" option for root device
	[  +0.318893] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	[  +0.717615] systemd-fstab-generator[2432]: Ignoring "noauto" option for root device
	[  +4.301239] kauditd_printk_skb: 196 callbacks suppressed
	[  +9.263094] systemd-fstab-generator[3249]: Ignoring "noauto" option for root device
	[  +8.401310] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.386313] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	
	
	==> etcd [9e80bcabc5ca3d9d4349b250687457360e4de3f5dd89703acb3893b93321e09f] <==
	{"level":"warn","ts":"2024-09-19T20:25:36.895043Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180177Z","time spent":"714.833576ms","remote":"127.0.0.1:39288","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:457 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2024-09-19T20:25:36.895135Z","caller":"traceutil/trace.go:171","msg":"trace[682819107] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"714.665673ms","start":"2024-09-19T20:25:36.180461Z","end":"2024-09-19T20:25:36.895127Z","steps":["trace[682819107] 'process raft request'  (duration: 714.392157ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:25:36.895158Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180417Z","time spent":"714.730153ms","remote":"127.0.0.1:39604","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:455 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2024-09-19T20:25:36.895281Z","caller":"traceutil/trace.go:171","msg":"trace[1102741383] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"627.760741ms","start":"2024-09-19T20:25:36.267508Z","end":"2024-09-19T20:25:36.895269Z","steps":["trace[1102741383] 'process raft request'  (duration: 627.404835ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T20:25:36.895314Z","caller":"traceutil/trace.go:171","msg":"trace[1753253500] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:498; }","duration":"714.985149ms","start":"2024-09-19T20:25:36.180322Z","end":"2024-09-19T20:25:36.895307Z","steps":["trace[1753253500] 'read index received'  (duration: 82.093953ms)","trace[1753253500] 'applied index is now lower than readState.Index'  (duration: 632.890646ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T20:25:36.895347Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.267490Z","time spent":"627.819506ms","remote":"127.0.0.1:39418","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-6kxlp\" mod_revision:460 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-6kxlp\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-6kxlp\" > >"}
	{"level":"warn","ts":"2024-09-19T20:25:36.895449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"715.119349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2024-09-19T20:25:36.895468Z","caller":"traceutil/trace.go:171","msg":"trace[63308975] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:463; }","duration":"715.144188ms","start":"2024-09-19T20:25:36.180318Z","end":"2024-09-19T20:25:36.895462Z","steps":["trace[63308975] 'agreement among raft nodes before linearized reading'  (duration: 715.054259ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:25:36.895486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180286Z","time spent":"715.19366ms","remote":"127.0.0.1:39226","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":1,"response size":392,"request content":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" "}
	{"level":"warn","ts":"2024-09-19T20:25:36.895589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"715.252118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-09-19T20:25:36.895602Z","caller":"traceutil/trace.go:171","msg":"trace[1299525818] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:463; }","duration":"715.265501ms","start":"2024-09-19T20:25:36.180333Z","end":"2024-09-19T20:25:36.895598Z","steps":["trace[1299525818] 'agreement among raft nodes before linearized reading'  (duration: 715.231118ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:25:36.895615Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180317Z","time spent":"715.294836ms","remote":"127.0.0.1:39332","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":229,"request content":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" "}
	{"level":"warn","ts":"2024-09-19T20:25:36.895690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"715.254624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-670672\" ","response":"range_response_count:1 size:5851"}
	{"level":"info","ts":"2024-09-19T20:25:36.895702Z","caller":"traceutil/trace.go:171","msg":"trace[1426666188] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-670672; range_end:; response_count:1; response_revision:463; }","duration":"715.270337ms","start":"2024-09-19T20:25:36.180428Z","end":"2024-09-19T20:25:36.895698Z","steps":["trace[1426666188] 'agreement among raft nodes before linearized reading'  (duration: 715.241556ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:25:36.895713Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180419Z","time spent":"715.291396ms","remote":"127.0.0.1:39312","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5873,"request content":"key:\"/registry/pods/kube-system/etcd-pause-670672\" "}
	{"level":"warn","ts":"2024-09-19T20:25:36.895741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"715.328513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-jmxnk\" ","response":"range_response_count:1 size:5149"}
	{"level":"info","ts":"2024-09-19T20:25:36.895792Z","caller":"traceutil/trace.go:171","msg":"trace[638520599] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-jmxnk; range_end:; response_count:1; response_revision:463; }","duration":"715.377578ms","start":"2024-09-19T20:25:36.180407Z","end":"2024-09-19T20:25:36.895784Z","steps":["trace[638520599] 'agreement among raft nodes before linearized reading'  (duration: 715.300738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T20:25:36.895825Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T20:25:36.180393Z","time spent":"715.418379ms","remote":"127.0.0.1:39312","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":5171,"request content":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-jmxnk\" "}
	{"level":"info","ts":"2024-09-19T20:25:37.191252Z","caller":"traceutil/trace.go:171","msg":"trace[1162924208] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:501; }","duration":"278.704688ms","start":"2024-09-19T20:25:36.912470Z","end":"2024-09-19T20:25:37.191175Z","steps":["trace[1162924208] 'read index received'  (duration: 271.665399ms)","trace[1162924208] 'applied index is now lower than readState.Index'  (duration: 7.038758ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T20:25:37.191390Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.896388ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-670672\" ","response":"range_response_count:1 size:5851"}
	{"level":"info","ts":"2024-09-19T20:25:37.191433Z","caller":"traceutil/trace.go:171","msg":"trace[1909379005] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-670672; range_end:; response_count:1; response_revision:464; }","duration":"278.958706ms","start":"2024-09-19T20:25:36.912467Z","end":"2024-09-19T20:25:37.191426Z","steps":["trace[1909379005] 'agreement among raft nodes before linearized reading'  (duration: 278.826623ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T20:25:37.191563Z","caller":"traceutil/trace.go:171","msg":"trace[2038750672] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"281.908491ms","start":"2024-09-19T20:25:36.909642Z","end":"2024-09-19T20:25:37.191551Z","steps":["trace[2038750672] 'process raft request'  (duration: 274.477777ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T20:25:37.192241Z","caller":"traceutil/trace.go:171","msg":"trace[1305136966] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"278.743593ms","start":"2024-09-19T20:25:36.913434Z","end":"2024-09-19T20:25:37.192178Z","steps":["trace[1305136966] 'process raft request'  (duration: 278.618176ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T20:25:37.192373Z","caller":"traceutil/trace.go:171","msg":"trace[1673847013] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"278.351969ms","start":"2024-09-19T20:25:36.914014Z","end":"2024-09-19T20:25:37.192366Z","steps":["trace[1673847013] 'process raft request'  (duration: 278.113834ms)"],"step_count":1}
	
	
	==> etcd [ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a] <==
	{"level":"info","ts":"2024-09-19T20:25:15.459920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-19T20:25:15.459978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e received MsgPreVoteResp from 32f03a72bea6354e at term 2"}
	{"level":"info","ts":"2024-09-19T20:25:15.460016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e became candidate at term 3"}
	{"level":"info","ts":"2024-09-19T20:25:15.460041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e received MsgVoteResp from 32f03a72bea6354e at term 3"}
	{"level":"info","ts":"2024-09-19T20:25:15.460068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e became leader at term 3"}
	{"level":"info","ts":"2024-09-19T20:25:15.460093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 32f03a72bea6354e elected leader 32f03a72bea6354e at term 3"}
	{"level":"info","ts":"2024-09-19T20:25:15.462028Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T20:25:15.462263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T20:25:15.462610Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T20:25:15.462649Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T20:25:15.462066Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"32f03a72bea6354e","local-member-attributes":"{Name:pause-670672 ClientURLs:[https://192.168.39.136:2379]}","request-path":"/0/members/32f03a72bea6354e/attributes","cluster-id":"6fc8639e731f3dca","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T20:25:15.463482Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T20:25:15.463702Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T20:25:15.464562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.136:2379"}
	{"level":"info","ts":"2024-09-19T20:25:15.464803Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T20:25:24.718660Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-19T20:25:24.718720Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-670672","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.136:2380"],"advertise-client-urls":["https://192.168.39.136:2379"]}
	{"level":"warn","ts":"2024-09-19T20:25:24.718829Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:25:24.718859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:25:24.720472Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.136:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T20:25:24.720512Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.136:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-19T20:25:24.721875Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"32f03a72bea6354e","current-leader-member-id":"32f03a72bea6354e"}
	{"level":"info","ts":"2024-09-19T20:25:24.725662Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.136:2380"}
	{"level":"info","ts":"2024-09-19T20:25:24.725742Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.136:2380"}
	{"level":"info","ts":"2024-09-19T20:25:24.725753Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-670672","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.136:2380"],"advertise-client-urls":["https://192.168.39.136:2379"]}
	
	
	==> kernel <==
	 20:25:47 up 1 min,  0 users,  load average: 0.76, 0.34, 0.12
	Linux pause-670672 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6e843ffd50bd660fb7e78baa93b67b7746b8af6d27b94c18b6496f0e90b9155f] <==
	I0919 20:25:31.192439       1 aggregator.go:171] initial CRD sync complete...
	I0919 20:25:31.192483       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 20:25:31.192493       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 20:25:31.192500       1 cache.go:39] Caches are synced for autoregister controller
	I0919 20:25:31.194367       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 20:25:31.194400       1 policy_source.go:224] refreshing policies
	I0919 20:25:31.201054       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 20:25:31.201529       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0919 20:25:31.201580       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0919 20:25:31.206387       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0919 20:25:31.217032       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 20:25:31.247496       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 20:25:31.247181       1 shared_informer.go:320] Caches are synced for configmaps
	I0919 20:25:31.248528       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0919 20:25:31.271442       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0919 20:25:31.277405       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0919 20:25:32.051481       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 20:25:32.286474       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.136]
	I0919 20:25:32.288031       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 20:25:32.298083       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 20:25:32.468618       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0919 20:25:32.482053       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0919 20:25:32.518520       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 20:25:32.554243       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 20:25:32.561090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7] <==
	I0919 20:25:17.091952       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0919 20:25:17.092032       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 20:25:17.092391       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 20:25:17.091376       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 20:25:17.095404       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0919 20:25:17.100134       1 controller.go:157] Shutting down quota evaluator
	I0919 20:25:17.100488       1 controller.go:176] quota evaluator worker shutdown
	I0919 20:25:17.100593       1 controller.go:176] quota evaluator worker shutdown
	I0919 20:25:17.100692       1 controller.go:176] quota evaluator worker shutdown
	I0919 20:25:17.100717       1 controller.go:176] quota evaluator worker shutdown
	I0919 20:25:17.100739       1 controller.go:176] quota evaluator worker shutdown
	W0919 20:25:17.821751       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:17.822668       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:18.821016       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:18.822598       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:19.821050       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:19.822747       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:20.821347       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:20.823368       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:21.821857       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:21.822255       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:22.822114       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:22.822134       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0919 20:25:23.820979       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0919 20:25:23.822762       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-controller-manager [a6d97dcc85cdba7cab90e81240c278a75d7fc25d02b77a2b074daf3d47a45621] <==
	I0919 20:25:34.493245       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0919 20:25:34.495648       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0919 20:25:34.499132       1 shared_informer.go:320] Caches are synced for ephemeral
	I0919 20:25:34.500379       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0919 20:25:34.501792       1 shared_informer.go:320] Caches are synced for GC
	I0919 20:25:34.507166       1 shared_informer.go:320] Caches are synced for taint
	I0919 20:25:34.507371       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 20:25:34.507620       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-670672"
	I0919 20:25:34.507796       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 20:25:34.513151       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0919 20:25:34.514470       1 shared_informer.go:320] Caches are synced for TTL
	I0919 20:25:34.514592       1 shared_informer.go:320] Caches are synced for attach detach
	I0919 20:25:34.636673       1 shared_informer.go:320] Caches are synced for deployment
	I0919 20:25:34.663614       1 shared_informer.go:320] Caches are synced for disruption
	I0919 20:25:34.673462       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 20:25:34.677015       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 20:25:34.714056       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0919 20:25:34.714316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="165.835µs"
	I0919 20:25:35.108532       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 20:25:35.131164       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 20:25:35.131254       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 20:25:37.195521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.025699591s"
	I0919 20:25:37.195627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="54.473µs"
	I0919 20:25:37.229510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.928424ms"
	I0919 20:25:37.229773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.811µs"
	
	
	==> kube-controller-manager [aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9] <==
	
	
	==> kube-proxy [4cae729f51a0fc176b78fb531951fef78bd0978b51fc5f46985bc44788b9e8e3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 20:24:29.619313       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 20:24:29.636506       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.136"]
	E0919 20:24:29.636877       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 20:24:29.688522       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 20:24:29.688577       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 20:24:29.688612       1 server_linux.go:169] "Using iptables Proxier"
	I0919 20:24:29.693718       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 20:24:29.694628       1 server.go:483] "Version info" version="v1.31.1"
	I0919 20:24:29.694684       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:24:29.698245       1 config.go:199] "Starting service config controller"
	I0919 20:24:29.698444       1 config.go:105] "Starting endpoint slice config controller"
	I0919 20:24:29.698734       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 20:24:29.698814       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 20:24:29.699149       1 config.go:328] "Starting node config controller"
	I0919 20:24:29.699317       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 20:24:29.799411       1 shared_informer.go:320] Caches are synced for service config
	I0919 20:24:29.799507       1 shared_informer.go:320] Caches are synced for node config
	I0919 20:24:29.800681       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [52410b8c9de7dbf380fcd01b1db2956cdadda87967c76b041ae0b4e706b42650] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 20:25:15.180664       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 20:25:16.988328       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.136"]
	E0919 20:25:16.988494       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 20:25:17.039667       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0919 20:25:17.039724       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 20:25:17.039753       1 server_linux.go:169] "Using iptables Proxier"
	I0919 20:25:17.042467       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 20:25:17.043162       1 server.go:483] "Version info" version="v1.31.1"
	I0919 20:25:17.043242       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:25:17.044639       1 config.go:199] "Starting service config controller"
	I0919 20:25:17.044685       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 20:25:17.044718       1 config.go:105] "Starting endpoint slice config controller"
	I0919 20:25:17.044738       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 20:25:17.046659       1 config.go:328] "Starting node config controller"
	I0919 20:25:17.046693       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 20:25:17.145092       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 20:25:17.145184       1 shared_informer.go:320] Caches are synced for service config
	I0919 20:25:17.146765       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a] <==
	I0919 20:25:15.112233       1 serving.go:386] Generated self-signed cert in-memory
	W0919 20:25:16.921996       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 20:25:16.922100       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 20:25:16.922136       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 20:25:16.922255       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 20:25:16.993558       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0919 20:25:16.993652       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:25:16.996322       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 20:25:16.996406       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:25:16.999663       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 20:25:16.999739       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 20:25:17.097481       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:25:24.861995       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0919 20:25:24.862124       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0919 20:25:24.862831       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [de8a09a54a42c156151203cf80a494b94bef7c73fae0a05bb5688ce9b28ca67c] <==
	I0919 20:25:28.469558       1 serving.go:386] Generated self-signed cert in-memory
	I0919 20:25:31.277068       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0919 20:25:31.277297       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 20:25:31.286549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 20:25:31.286665       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 20:25:31.286898       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:25:31.286962       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 20:25:31.286989       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 20:25:31.287013       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0919 20:25:31.286589       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0919 20:25:31.287168       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0919 20:25:31.387724       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 20:25:31.388281       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0919 20:25:31.388617       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Sep 19 20:25:26 pause-670672 kubelet[3256]: I0919 20:25:26.903308    3256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7394ade2d983f4ac0e2571a895778847-etcd-certs\") pod \"etcd-pause-670672\" (UID: \"7394ade2d983f4ac0e2571a895778847\") " pod="kube-system/etcd-pause-670672"
	Sep 19 20:25:26 pause-670672 kubelet[3256]: E0919 20:25:26.904697    3256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-670672?timeout=10s\": dial tcp 192.168.39.136:8443: connect: connection refused" interval="400ms"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.083037    3256 kubelet_node_status.go:72] "Attempting to register node" node="pause-670672"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: E0919 20:25:27.083956    3256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.136:8443: connect: connection refused" node="pause-670672"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.156961    3256 scope.go:117] "RemoveContainer" containerID="ca11197e6bec76f2e6ae424c7e0149a1ae6e345c0071077b76d577aba9a089d7"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.157374    3256 scope.go:117] "RemoveContainer" containerID="ecc20d16db725142d47364848f43fe3e205aeef14a2ac66ed3f60fcbb1f0745a"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.159362    3256 scope.go:117] "RemoveContainer" containerID="aa0824d46c53f8e0a62a6de8684939a07eab36d2d1f915b1dd87b4095d0e13e9"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.160135    3256 scope.go:117] "RemoveContainer" containerID="22f2310aed4d6235e1db1cb7b4691c0b404829bbe7b5d29fc8d57448fceea46a"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: E0919 20:25:27.306710    3256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-670672?timeout=10s\": dial tcp 192.168.39.136:8443: connect: connection refused" interval="800ms"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: I0919 20:25:27.485866    3256 kubelet_node_status.go:72] "Attempting to register node" node="pause-670672"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: E0919 20:25:27.486686    3256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.136:8443: connect: connection refused" node="pause-670672"
	Sep 19 20:25:27 pause-670672 kubelet[3256]: E0919 20:25:27.504640    3256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.136:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-670672.17f6bf02fc3a1670  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-670672,UID:pause-670672,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:pause-670672,},FirstTimestamp:2024-09-19 20:25:26.685668976 +0000 UTC m=+0.113961314,LastTimestamp:2024-09-19 20:25:26.685668976 +0000 UTC m=+0.113961314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-670672,}"
	Sep 19 20:25:28 pause-670672 kubelet[3256]: I0919 20:25:28.287970    3256 kubelet_node_status.go:72] "Attempting to register node" node="pause-670672"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.316817    3256 kubelet_node_status.go:111] "Node was previously registered" node="pause-670672"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.317500    3256 kubelet_node_status.go:75] "Successfully registered node" node="pause-670672"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.317665    3256 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.319121    3256 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.684566    3256 apiserver.go:52] "Watching apiserver"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.698797    3256 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.743425    3256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24fc727e-56bc-48fc-bb7d-6fd965042da0-xtables-lock\") pod \"kube-proxy-jb8pb\" (UID: \"24fc727e-56bc-48fc-bb7d-6fd965042da0\") " pod="kube-system/kube-proxy-jb8pb"
	Sep 19 20:25:31 pause-670672 kubelet[3256]: I0919 20:25:31.743566    3256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24fc727e-56bc-48fc-bb7d-6fd965042da0-lib-modules\") pod \"kube-proxy-jb8pb\" (UID: \"24fc727e-56bc-48fc-bb7d-6fd965042da0\") " pod="kube-system/kube-proxy-jb8pb"
	Sep 19 20:25:36 pause-670672 kubelet[3256]: E0919 20:25:36.784029    3256 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777536783419224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:25:36 pause-670672 kubelet[3256]: E0919 20:25:36.784518    3256 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777536783419224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:25:46 pause-670672 kubelet[3256]: E0919 20:25:46.788422    3256 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777546787171674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 20:25:46 pause-670672 kubelet[3256]: E0919 20:25:46.788464    3256 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726777546787171674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0919 20:25:46.211673   61875 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19664-7917/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-670672 -n pause-670672
helpers_test.go:261: (dbg) Run:  kubectl --context pause-670672 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (43.51s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (7200.053s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-245476 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0919 20:36:59.648885   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/bridge-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:37:09.496921   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/custom-flannel-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:37:40.385087   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/kindnet-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:37:40.610936   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/bridge-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:37:43.766853   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/auto-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:37:53.861496   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/enable-default-cni-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:38:08.088073   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/kindnet-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:38:11.469142   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/auto-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:38:14.351765   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/flannel-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:38:22.478294   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/calico-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:38:50.178948   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/calico-801740/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:38:59.334884   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:39:02.532889   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/bridge-801740/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
	running tests:
		TestKubernetesUpgrade (16m15s)
		TestNetworkPlugins (18m30s)
		TestNetworkPlugins/group (7m19s)
		TestStartStop (18m17s)
		TestStartStop/group/embed-certs (7m19s)
		TestStartStop/group/embed-certs/serial (7m19s)
		TestStartStop/group/embed-certs/serial/SecondStart (3m33s)
		TestStartStop/group/no-preload (8m2s)
		TestStartStop/group/no-preload/serial (8m2s)
		TestStartStop/group/no-preload/serial/SecondStart (3m40s)
		TestStartStop/group/old-k8s-version (8m27s)
		TestStartStop/group/old-k8s-version/serial (8m27s)
		TestStartStop/group/old-k8s-version/serial/SecondStart (2m6s)

goroutine 3349 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 13 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00067cea0, 0xc000731bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc000546108, {0x4588140, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x4644680?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc00080d860)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00080d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006b7e80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2744 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3221b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2740
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2298 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3221b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2297
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 175 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7fa2378ce0b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000116100?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000116100)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc000116100)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00028ac40)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00028ac40)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc000165860, {0x321e9b0, 0xc00028ac40})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc000165860)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0006fc1a0?, 0xc0006fc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 172
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

goroutine 2299 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006c0d40, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2297
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3327 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7fa2378ce4d0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00065e480?, 0xc001336269?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00065e480, {0xc001336269, 0x597, 0x597})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009d0b0, {0xc001336269?, 0x4917c0?, 0x206?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00086f5f0, {0x3205680, 0xc0019cc668})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3205800, 0xc00086f5f0}, {0x3205680, 0xc0019cc668}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00009d0b0?, {0x3205800, 0xc00086f5f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00009d0b0, {0x3205800, 0xc00086f5f0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3205800, 0xc00086f5f0}, {0x3205700, 0xc00009d0b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0009bf800?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3326
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 2460 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3221b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2459
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2186 [chan receive, 19 minutes]:
testing.(*testContext).waitParallel(0xc0004cc870)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001827860)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001827860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001827860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001827860, 0xc0006c04c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2181
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3056 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3055
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2926 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0019d1950, 0x10)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00136ed80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3244680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019d1980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000657c30, {0x3206d40, 0xc00133d1d0}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000657c30, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2849
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 2601 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x322b280, 0xc000064310}, 0xc001468750, 0xc001468798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x322b280, 0xc000064310}, 0xd0?, 0xc001468750, 0xc001468798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x322b280?, 0xc000064310?}, 0xc001826340?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc00020b500?, 0xc00080e4d0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2461
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 2181 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc001826d00, 0x2f0a9f0)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1748
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2387 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3221b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2303
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1799 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc0004cc870)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1666 +0x5e5
testing.tRunner(0xc00067d040, 0xc0005fe138)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1682
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2182 [chan receive, 8 minutes]:
testing.(*T).Run(0xc001827040, {0x258f061?, 0x0?}, 0xc001b6c600)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001827040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001827040, 0xc0006c03c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2181
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3219 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x322b280, 0xc000064310}, 0xc0012f4f50, 0xc0012a6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x322b280, 0xc000064310}, 0x30?, 0xc0012f4f50, 0xc0012f4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x322b280?, 0xc000064310?}, 0xc00149aa50?, 0xc00065bc50?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc0016f4f00?, 0xc001894930?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3190
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 1682 [chan receive, 19 minutes]:
testing.(*T).Run(0xc00067c1a0, {0x258dd1c?, 0x55917c?}, 0xc0005fe138)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00067c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc00067c1a0, 0x2f0a7b0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 396 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 395
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2187 [chan receive, 7 minutes]:
testing.(*T).Run(0xc001827a00, {0x258f061?, 0x0?}, 0xc001b6c080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001827a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001827a00, 0xc0006c0540)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2181
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2871 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x322b280, 0xc000064310}, 0xc001b87f50, 0xc001b87f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x322b280, 0xc000064310}, 0xce?, 0xc001b87f50, 0xc001b87f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x322b280?, 0xc000064310?}, 0xc0002876c0?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014347d0?, 0x593ba4?, 0xc001e4d410?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2836
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 2591 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2590
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2185 [chan receive, 8 minutes]:
testing.(*T).Run(0xc001827520, {0x258f061?, 0x0?}, 0xc001c18080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001827520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001827520, 0xc0006c0480)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2181
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2371 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x322b280, 0xc000064310}, 0xc001299f50, 0xc001299f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x322b280, 0xc000064310}, 0x0?, 0xc001299f50, 0xc001299f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x322b280?, 0xc000064310?}, 0xc00067c1a0?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012c8fd0?, 0x593ba4?, 0xc001c36000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2299
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 2404 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x322b280, 0xc000064310}, 0xc0012c9750, 0xc0000abf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x322b280, 0xc000064310}, 0x90?, 0xc0012c9750, 0xc0012c9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x322b280?, 0xc000064310?}, 0x3207880?, 0xc001836880?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012c97d0?, 0x593ba4?, 0xc00086f080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2388
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 2461 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00087a600, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2459
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3069 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000828680, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3078
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3293 [syscall, 3 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x14, 0xc0000adb30, 0x4, 0xc001c7a510, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc001c66348?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0016f4300)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0016f4300)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc00067da00, 0xc0016f4300)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x322b078, 0xc000410a10}, 0xc00067da00, {0xc0013b8960, 0x11}, {0x0?, 0xc0012c8760?}, {0x559033?, 0x4b162f?}, {0xc0001d1400, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00067da00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00067da00, 0xc001c18400)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3039
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 395 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x322b280, 0xc000064310}, 0xc00009b750, 0xc0000acf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x322b280, 0xc000064310}, 0xa0?, 0xc00009b750, 0xc00009b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x322b280?, 0xc000064310?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc0008a7080?, 0xc0014a69a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 412
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 394 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009bdad0, 0x23)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0012a9d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3244680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009bdf00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004796f0, {0x3206d40, 0xc001304de0}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004796f0, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 412
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3308 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c28900, 0xc001c86f50)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3305
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 2372 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2371
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 586 [chan send, 74 minutes]:
os/exec.(*Cmd).watchCtx(0xc001d0be00, 0xc000065110)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 316
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 411 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3221b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 357
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 569 [chan send, 74 minutes]:
os/exec.(*Cmd).watchCtx(0xc0008a7680, 0xc0014a71f0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 568
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 445 [chan send, 74 minutes]:
os/exec.(*Cmd).watchCtx(0xc001876180, 0xc001825260)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 444
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 2872 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2871
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3329 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc000206a80, 0xc001c37730)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3326
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3326 [syscall, 3 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x13, 0xc00146fb30, 0x4, 0xc001445b90, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc001918a98?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000206a80)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc000206a80)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0006fc4e0, 0xc000206a80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x322b078, 0xc0004119d0}, 0xc0006fc4e0, {0xc0013b8e70, 0x16}, {0x0?, 0xc0012f3760?}, {0x559033?, 0x4b162f?}, {0xc0016f5200, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0006fc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0006fc4e0, 0xc0009bf800)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2978
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2184 [chan receive, 19 minutes]:
testing.(*testContext).waitParallel(0xc0004cc870)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001827380)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001827380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001827380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001827380, 0xc0006c0440)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2181
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 809 [select, 74 minutes]:
net/http.(*persistConn).readLoop(0xc00184e6c0)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 807
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 3218 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00087a850, 0x1)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00072dd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3244680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00087a880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e367a0, {0x3206d40, 0xc001cee210}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e367a0, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3190
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 412 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009bdf00, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 357
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3068 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3221b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3078
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2403 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00028b3d0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001487d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3244680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00028b400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019176f0, {0x3206d40, 0xc00065a6f0}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019176f0, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2388
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 2370 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0006c0d10, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00136fd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3244680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006c0d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001874070, {0x3206d40, 0xc001d32240}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001874070, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2299
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 2388 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00028b400, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2303
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 1748 [chan receive, 19 minutes]:
testing.(*T).Run(0xc00197cb60, {0x258dd1c?, 0x559033?}, 0x2f0a9f0)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00197cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00197cb60, 0x2f0a7f8)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3055 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x322b280, 0xc000064310}, 0xc0012f5750, 0xc0012f5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x322b280, 0xc000064310}, 0x30?, 0xc0012f5750, 0xc0012f5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x322b280?, 0xc000064310?}, 0x1?, 0x6?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc001c29380?, 0xc001374930?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3069
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 1752 [syscall, 10 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x16, 0xc00072f8d8, 0x4, 0xc001c7a480, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc001c662a0?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001e4a480)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001e4a480)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc00197d1e0, 0xc001e4a480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00197d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:275 +0x141f
testing.tRunner(0xc00197d1e0, 0x2f0a778)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3220 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3219
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 3189 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3221b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3202
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3054 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000828610, 0x10)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0012a4d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3244680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000828680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001cf25a0, {0x3206d40, 0xc001d32030}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001cf25a0, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3069
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 2928 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2927
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2927 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x322b280, 0xc000064310}, 0xc0012f9750, 0xc0012f9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x322b280, 0xc000064310}, 0x0?, 0xc0012f9750, 0xc0012f9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x322b280?, 0xc000064310?}, 0x9e92b6?, 0xc0008a6f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012f97d0?, 0x593ba4?, 0xc0012f97a8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2849
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 2571 [IO wait]:
internal/poll.runtime_pollWait(0x7fa2378cdc90, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00149e600?, 0xc00132c2c4?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00149e600, {0xc00132c2c4, 0x53c, 0x53c})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006ba470, {0xc00132c2c4?, 0xc001437d58?, 0x208?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001304690, {0x3205680, 0xc00009c890})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3205800, 0xc001304690}, {0x3205680, 0xc00009c890}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006ba470?, {0x3205800, 0xc001304690})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0006ba470, {0x3205800, 0xc001304690})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3205800, 0xc001304690}, {0x3205700, 0xc0006ba470}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0015a4180?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1752
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 810 [select, 74 minutes]:
net/http.(*persistConn).writeLoop(0xc00184e6c0)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 807
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 2572 [IO wait]:
internal/poll.runtime_pollWait(0x7fa2378ce5d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00149e6c0?, 0xc001efb5dc?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00149e6c0, {0xc001efb5dc, 0x60a24, 0x60a24})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006ba4c8, {0xc001efb5dc?, 0x411b30?, 0x7fe7a?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0013046c0, {0x3205680, 0xc00009c898})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3205800, 0xc0013046c0}, {0x3205680, 0xc00009c898}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006ba4c8?, {0x3205800, 0xc0013046c0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0006ba4c8, {0x3205800, 0xc0013046c0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3205800, 0xc0013046c0}, {0x3205700, 0xc0006ba4c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0014a60e0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1752
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 2589 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000828d90, 0x10)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00136cd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3244680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000828dc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001916120, {0x3206d40, 0xc001a42150}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001916120, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2745
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3039 [chan receive, 3 minutes]:
testing.(*T).Run(0xc000287520, {0x25997e0?, 0xc001463570?}, 0xc001c18400)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000287520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000287520, 0xc001c18080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2185
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2849 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019d1980, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2847
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 2183 [chan receive, 19 minutes]:
testing.(*testContext).waitParallel(0xc0004cc870)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0018271e0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0018271e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0018271e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0018271e0, 0xc0006c0400)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2181
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3306 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7fa2368a17a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000a7f320?, 0xc0013b6df3?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000a7f320, {0xc0013b6df3, 0x20d, 0x20d})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0019cc3f8, {0xc0013b6df3?, 0x4917c0?, 0x44?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00065af60, {0x3205680, 0xc00009c970})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3205800, 0xc00065af60}, {0x3205680, 0xc00009c970}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0019cc3f8?, {0x3205800, 0xc00065af60})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0019cc3f8, {0x3205800, 0xc00065af60})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3205800, 0xc00065af60}, {0x3205700, 0xc0019cc3f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001686500?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3305
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 2978 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0002876c0, {0x25997e0?, 0x0?}, 0xc0009bf800)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0002876c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0002876c0, 0xc001b6c600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2182
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2836 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019d01c0, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2866
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 2590 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x322b280, 0xc000064310}, 0xc000506750, 0xc000506798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x322b280, 0xc000064310}, 0xe0?, 0xc000506750, 0xc000506798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x322b280?, 0xc000064310?}, 0x10000c00197d380?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005067d0?, 0x9f7625?, 0xc001e4a900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2745
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 2405 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2404
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2835 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3221b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2866
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2600 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00087a550, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001483d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3244680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00087a600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e36620, {0x3206d40, 0xc001304870}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e36620, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2461
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 3328 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7fa2378cdb88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00065e540?, 0xc001490ba9?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00065e540, {0xc001490ba9, 0x1457, 0x1457})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009d0e8, {0xc001490ba9?, 0x207e020?, 0x2000?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00086f620, {0x3205680, 0xc0008fc3a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3205800, 0xc00086f620}, {0x3205680, 0xc0008fc3a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00009d0e8?, {0x3205800, 0xc00086f620})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00009d0e8, {0x3205800, 0xc00086f620})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3205800, 0xc00086f620}, {0x3205700, 0xc00009d0e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0x3221b60?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3326
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 3190 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00087a880, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3202
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 2602 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2601
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2573 [select, 10 minutes]:
os/exec.(*Cmd).watchCtx(0xc001e4a480, 0xc0014a61c0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1752
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 2870 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0019d0190, 0x10)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001b89d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3244680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019d01c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000808000, {0x3206d40, 0xc0006742a0}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000808000, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2836
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 2848 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3221b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2847
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2745 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000828dc0, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2740
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3294 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7fa2368a1280, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001cd0900?, 0xc00132ca27?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001cd0900, {0xc00132ca27, 0x5d9, 0x5d9})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006ba448, {0xc00132ca27?, 0x4917c0?, 0x227?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00133c900, {0x3205680, 0xc0008fc358})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3205800, 0xc00133c900}, {0x3205680, 0xc0008fc358}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006ba448?, {0x3205800, 0xc00133c900})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0006ba448, {0x3205800, 0xc00133c900})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3205800, 0xc00133c900}, {0x3205700, 0xc0006ba448}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001c18400?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3293
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 3305 [syscall, 3 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x15, 0xc001403b30, 0x4, 0xc001b54f30, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc001a44660?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001c28900)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001c28900)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc00197d6c0, 0xc001c28900)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x322b078, 0xc000448070}, 0xc00197d6c0, {0xc0005305b8, 0x12}, {0x0?, 0xc000506f60?}, {0x559033?, 0x4b162f?}, {0xc000570200, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00197d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00197d6c0, 0xc001686500)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3181
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3181 [chan receive, 3 minutes]:
testing.(*T).Run(0xc00067d520, {0x25997e0?, 0xc001469d70?}, 0xc001686500)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00067d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00067d520, 0xc001b6c080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2187
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3295 [IO wait]:
internal/poll.runtime_pollWait(0x7fa2378ce1b8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001cd0a20?, 0xc0012f052c?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001cd0a20, {0xc0012f052c, 0x1ad4, 0x1ad4})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006ba4a8, {0xc0012f052c?, 0x5?, 0x3ede?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00133c930, {0x3205680, 0xc0019cc3a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3205800, 0xc00133c930}, {0x3205680, 0xc0019cc3a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006ba4a8?, {0x3205800, 0xc00133c930})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0006ba4a8, {0x3205800, 0xc00133c930})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3205800, 0xc00133c930}, {0x3205700, 0xc0006ba4a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0009bf780?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3293
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 3296 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc0016f4300, 0xc0000656c0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3293
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3307 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7fa2378ce2c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000a7f3e0?, 0xc0013d4af9?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000a7f3e0, {0xc0013d4af9, 0x1507, 0x1507})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0019cc418, {0xc0013d4af9?, 0x5?, 0x2000?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00065aff0, {0x3205680, 0xc0006ba658})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3205800, 0xc00065aff0}, {0x3205680, 0xc0006ba658}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0019cc418?, {0x3205800, 0xc00065aff0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0019cc418, {0x3205800, 0xc00065aff0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3205800, 0xc00065aff0}, {0x3205700, 0xc0019cc418}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0009bf780?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3305
	/usr/local/go/src/os/exec/exec.go:732 +0x98b
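
Note: the "os/exec.(*Cmd).writerDescriptor" and "watchCtx" goroutines in the traces above are the output-capture and context-watch goroutines that os/exec starts for each minikube subprocess the harness runs; they stay blocked until the child exits or its context is cancelled, which is why they show up when the suite is dumped on timeout. A minimal sketch of that capture pattern (illustrative only, not taken from the test code):

package main

import (
	"bytes"
	"context"
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	var out bytes.Buffer
	cmd := exec.CommandContext(ctx, "echo", "hello")
	// Because Stdout is a bytes.Buffer rather than an *os.File, Start spawns a
	// goroutine that io.Copy()s the child's pipe into the buffer (the
	// writerDescriptor frames above); CommandContext also adds a watchCtx goroutine.
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	fmt.Print(out.String())
}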

                                                
                                    

Test pass (159/203)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 35.69
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.14
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 20.4
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 115.25
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
28 TestCertOptions 45.66
29 TestCertExpiration 304.95
31 TestForceSystemdFlag 65.56
32 TestForceSystemdEnv 46.1
34 TestKVMDriverInstallOrUpdate 4.52
38 TestErrorSpam/setup 42.44
39 TestErrorSpam/start 0.33
40 TestErrorSpam/status 0.74
41 TestErrorSpam/pause 1.58
42 TestErrorSpam/unpause 1.67
43 TestErrorSpam/stop 4.89
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 80.16
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 51.62
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.08
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.5
55 TestFunctional/serial/CacheCmd/cache/add_local 2.29
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
60 TestFunctional/serial/CacheCmd/cache/delete 0.09
61 TestFunctional/serial/MinikubeKubectlCmd 0.1
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
63 TestFunctional/serial/ExtraConfig 34.95
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 1.46
66 TestFunctional/serial/LogsFileCmd 1.43
67 TestFunctional/serial/InvalidService 4.48
69 TestFunctional/parallel/ConfigCmd 0.3
70 TestFunctional/parallel/DashboardCmd 18.58
71 TestFunctional/parallel/DryRun 0.3
72 TestFunctional/parallel/InternationalLanguage 0.14
73 TestFunctional/parallel/StatusCmd 1.21
77 TestFunctional/parallel/ServiceCmdConnect 10.5
78 TestFunctional/parallel/AddonsCmd 0.13
79 TestFunctional/parallel/PersistentVolumeClaim 48.18
81 TestFunctional/parallel/SSHCmd 0.4
82 TestFunctional/parallel/CpCmd 1.3
83 TestFunctional/parallel/MySQL 33.3
84 TestFunctional/parallel/FileSync 0.25
85 TestFunctional/parallel/CertSync 1.54
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
93 TestFunctional/parallel/License 0.68
103 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
105 TestFunctional/parallel/ProfileCmd/profile_list 0.34
106 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
107 TestFunctional/parallel/MountCmd/any-port 9.44
108 TestFunctional/parallel/MountCmd/specific-port 1.98
109 TestFunctional/parallel/ServiceCmd/List 0.37
110 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
111 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
112 TestFunctional/parallel/ServiceCmd/Format 0.44
113 TestFunctional/parallel/ServiceCmd/URL 0.46
114 TestFunctional/parallel/MountCmd/VerifyCleanup 1.48
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.52
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
119 TestFunctional/parallel/ImageCommands/ImageBuild 3.69
120 TestFunctional/parallel/ImageCommands/Setup 2.26
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.08
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
125 TestFunctional/parallel/Version/short 0.04
126 TestFunctional/parallel/Version/components 0.44
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.86
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.77
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.9
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
133 TestFunctional/delete_echo-server_images 0.03
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.01
139 TestMultiControlPlane/serial/StartCluster 195.36
140 TestMultiControlPlane/serial/DeployApp 7.03
141 TestMultiControlPlane/serial/PingHostFromPods 1.2
142 TestMultiControlPlane/serial/AddWorkerNode 56.69
143 TestMultiControlPlane/serial/NodeLabels 0.07
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
145 TestMultiControlPlane/serial/CopyFile 12.47
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.15
151 TestMultiControlPlane/serial/DeleteSecondaryNode 16.67
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
158 TestJSONOutput/start/Command 84.89
159 TestJSONOutput/start/Audit 0
161 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/pause/Command 0.69
165 TestJSONOutput/pause/Audit 0
167 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/unpause/Command 0.6
171 TestJSONOutput/unpause/Audit 0
173 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/stop/Command 7.36
177 TestJSONOutput/stop/Audit 0
179 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
181 TestErrorJSONOutput 0.19
186 TestMainNoArgs 0.04
187 TestMinikubeProfile 89.34
190 TestMountStart/serial/StartWithMountFirst 29.09
191 TestMountStart/serial/VerifyMountFirst 0.36
192 TestMountStart/serial/StartWithMountSecond 29.02
193 TestMountStart/serial/VerifyMountSecond 0.36
194 TestMountStart/serial/DeleteFirst 0.87
195 TestMountStart/serial/VerifyMountPostDelete 0.36
196 TestMountStart/serial/Stop 1.29
197 TestMountStart/serial/RestartStopped 21.96
198 TestMountStart/serial/VerifyMountPostStop 0.36
201 TestMultiNode/serial/FreshStart2Nodes 112.25
202 TestMultiNode/serial/DeployApp2Nodes 5.52
203 TestMultiNode/serial/PingHostFrom2Pods 0.77
204 TestMultiNode/serial/AddNode 50.82
205 TestMultiNode/serial/MultiNodeLabels 0.06
206 TestMultiNode/serial/ProfileList 0.56
207 TestMultiNode/serial/CopyFile 6.96
208 TestMultiNode/serial/StopNode 2.3
209 TestMultiNode/serial/StartAfterStop 38.43
211 TestMultiNode/serial/DeleteNode 2.12
213 TestMultiNode/serial/RestartMultiNode 193.21
214 TestMultiNode/serial/ValidateNameConflict 43.02
221 TestScheduledStopUnix 115.4
225 TestRunningBinaryUpgrade 242.93
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
231 TestNoKubernetes/serial/StartWithK8s 95.6
250 TestNoKubernetes/serial/StartWithStopK8s 42.2
251 TestNoKubernetes/serial/Start 47.91
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
253 TestNoKubernetes/serial/ProfileList 1.97
254 TestNoKubernetes/serial/Stop 1.29
255 TestNoKubernetes/serial/StartNoArgs 44.54
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
258 TestPause/serial/Start 79.9
259 TestStoppedBinaryUpgrade/Setup 2.61
260 TestStoppedBinaryUpgrade/Upgrade 136.66
263 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
x
+
TestDownloadOnly/v1.20.0/json-events (35.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-289933 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-289933 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (35.692086972s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (35.69s)
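
Note: with -o=json, minikube start prints one JSON event per line on stdout, which is what this json-events subtest consumes. A minimal sketch of reading such a stream from a subprocess (the profile name is a placeholder, the binary path assumes a local build, and the "type" field is the CloudEvents-style key minikube's JSON output is understood to use; this is not the test's own helper):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Flags mirror the invocation shown above.
	cmd := exec.Command("./out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-demo",
		"--kubernetes-version=v1.20.0", "--driver=kvm2", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // individual events can be long
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Println("event type:", ev["type"]) // field name assumed, adjust as needed
	}
	_ = cmd.Wait()
}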

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0919 18:39:41.023802   15116 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0919 18:39:41.023900   15116 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-289933
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-289933: exit status 85 (135.923211ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-289933 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |          |
	|         | -p download-only-289933        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:05.367058   15128 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:05.367338   15128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:05.367349   15128 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:05.367356   15128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:05.367544   15128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	W0919 18:39:05.367701   15128 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19664-7917/.minikube/config/config.json: open /home/jenkins/minikube-integration/19664-7917/.minikube/config/config.json: no such file or directory
	I0919 18:39:05.368278   15128 out.go:352] Setting JSON to true
	I0919 18:39:05.369188   15128 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1289,"bootTime":1726769856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:39:05.369278   15128 start.go:139] virtualization: kvm guest
	I0919 18:39:05.371535   15128 out.go:97] [download-only-289933] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0919 18:39:05.371646   15128 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 18:39:05.371707   15128 notify.go:220] Checking for updates...
	I0919 18:39:05.373107   15128 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:39:05.374325   15128 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:05.375508   15128 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 18:39:05.376660   15128 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 18:39:05.377682   15128 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 18:39:05.379645   15128 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 18:39:05.379936   15128 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:39:05.477864   15128 out.go:97] Using the kvm2 driver based on user configuration
	I0919 18:39:05.477892   15128 start.go:297] selected driver: kvm2
	I0919 18:39:05.477899   15128 start.go:901] validating driver "kvm2" against <nil>
	I0919 18:39:05.478211   15128 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:39:05.478330   15128 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 18:39:05.492776   15128 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 18:39:05.492825   15128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:05.493406   15128 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0919 18:39:05.493576   15128 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 18:39:05.493610   15128 cni.go:84] Creating CNI manager for ""
	I0919 18:39:05.493665   15128 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:39:05.493676   15128 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:39:05.493747   15128 start.go:340] cluster config:
	{Name:download-only-289933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-289933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:05.493935   15128 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:39:05.495475   15128 out.go:97] Downloading VM boot image ...
	I0919 18:39:05.495513   15128 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0919 18:39:21.228124   15128 out.go:97] Starting "download-only-289933" primary control-plane node in "download-only-289933" cluster
	I0919 18:39:21.228157   15128 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0919 18:39:21.338159   15128 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0919 18:39:21.338187   15128 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:21.338332   15128 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0919 18:39:21.340403   15128 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0919 18:39:21.340421   15128 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 18:39:21.454827   15128 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0919 18:39:38.927480   15128 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 18:39:38.927572   15128 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 18:39:39.834308   15128 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0919 18:39:39.834680   15128 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/download-only-289933/config.json ...
	I0919 18:39:39.834717   15128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/download-only-289933/config.json: {Name:mkae4faa5ccd5d45e17d608f0c2921b931dc3096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:39.834899   15128 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0919 18:39:39.835107   15128 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-289933 host does not exist
	  To start a cluster, run: "minikube start -p download-only-289933"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.14s)
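
Note: the preload tarball in the log above is fetched with an md5 digest appended to the URL (?checksum=md5:...) and then verified on disk ("getting checksum ... verifying checksum"). A stdlib-only sketch of that verify step (the local path is a placeholder and this is not minikube's download.go; the digest is the one shown in the log):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected hex digest.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// placeholder path; digest taken from the download log above
	if err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"f93b07cde9c3289306cbaeb7a1803c19"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("checksum OK")
}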

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-289933
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (20.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-001940 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-001940 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (20.396985308s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (20.40s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0919 18:40:01.810314   15116 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0919 18:40:01.810353   15116 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-001940
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-001940: exit status 85 (57.952276ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-289933 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p download-only-289933        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-289933        | download-only-289933 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | -o=json --download-only        | download-only-001940 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p download-only-001940        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:41.449445   15418 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:41.449679   15418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:41.449687   15418 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:41.449692   15418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:41.449881   15418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 18:39:41.450421   15418 out.go:352] Setting JSON to true
	I0919 18:39:41.451205   15418 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1325,"bootTime":1726769856,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:39:41.451308   15418 start.go:139] virtualization: kvm guest
	I0919 18:39:41.453512   15418 out.go:97] [download-only-001940] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:39:41.453619   15418 notify.go:220] Checking for updates...
	I0919 18:39:41.455154   15418 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:39:41.456569   15418 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:41.457894   15418 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 18:39:41.459193   15418 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 18:39:41.460481   15418 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 18:39:41.462905   15418 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 18:39:41.463112   15418 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:39:41.495067   15418 out.go:97] Using the kvm2 driver based on user configuration
	I0919 18:39:41.495089   15418 start.go:297] selected driver: kvm2
	I0919 18:39:41.495094   15418 start.go:901] validating driver "kvm2" against <nil>
	I0919 18:39:41.495420   15418 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:39:41.495500   15418 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19664-7917/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 18:39:41.510350   15418 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0919 18:39:41.510399   15418 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:41.510935   15418 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0919 18:39:41.511079   15418 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 18:39:41.511135   15418 cni.go:84] Creating CNI manager for ""
	I0919 18:39:41.511183   15418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:39:41.511191   15418 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:39:41.511243   15418 start.go:340] cluster config:
	{Name:download-only-001940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-001940 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:41.511336   15418 iso.go:125] acquiring lock: {Name:mk147228b9694726fa32ddf9a7c3cfd0fd29624a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:39:41.513162   15418 out.go:97] Starting "download-only-001940" primary control-plane node in "download-only-001940" cluster
	I0919 18:39:41.513181   15418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:41.620888   15418 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 18:39:41.620931   15418 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:41.621198   15418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:41.623069   15418 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0919 18:39:41.623094   15418 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0919 18:39:41.736218   15418 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 18:39:59.944899   15418 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0919 18:39:59.944991   15418 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19664-7917/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0919 18:40:00.683344   15418 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 18:40:00.683670   15418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/download-only-001940/config.json ...
	I0919 18:40:00.683699   15418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/download-only-001940/config.json: {Name:mk318554e57cf0d1ac136a6bb29341c3767c2991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:00.683868   15418 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:00.684043   15418 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19664-7917/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-001940 host does not exist
	  To start a cluster, run: "minikube start -p download-only-001940"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-001940
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0919 18:40:02.385881   15116 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-446237 --alsologtostderr --binary-mirror http://127.0.0.1:42349 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-446237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-446237
--- PASS: TestBinaryMirror (0.59s)
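
Note: TestBinaryMirror passes --binary-mirror pointing at a loopback URL (http://127.0.0.1:42349 above) so kubectl/kubeadm/kubelet downloads come from a local server instead of dl.k8s.io. For manual experimentation, a plain directory-backed file server can play the same role; the sketch below is not the test's helper, and the directory layout is an assumption modelled on the dl.k8s.io path shown in the log (release/v1.31.1/bin/linux/amd64/kubectl):

package main

import (
	"log"
	"net/http"
)

func main() {
	// e.g. ./mirror/release/v1.31.1/bin/linux/amd64/kubectl (layout assumed)
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving binary mirror on http://127.0.0.1:42349")
	log.Fatal(http.ListenAndServe("127.0.0.1:42349", fs))
}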

                                                
                                    
x
+
TestOffline (115.25s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-011213 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-011213 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m54.229901183s)
helpers_test.go:175: Cleaning up "offline-crio-011213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-011213
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-011213: (1.015821515s)
--- PASS: TestOffline (115.25s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-140799
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-140799: exit status 85 (48.648782ms)

                                                
                                                
-- stdout --
	* Profile "addons-140799" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-140799"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-140799
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-140799: exit status 85 (50.813982ms)

                                                
                                                
-- stdout --
	* Profile "addons-140799" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-140799"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestCertOptions (45.66s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-065795 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-065795 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.219711533s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-065795 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-065795 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-065795 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-065795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-065795
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-065795: (1.013280378s)
--- PASS: TestCertOptions (45.66s)

                                                
                                    
x
+
TestCertExpiration (304.95s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-478436 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-478436 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m25.858860832s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-478436 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-478436 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.083630803s)
helpers_test.go:175: Cleaning up "cert-expiration-478436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-478436
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-478436: (1.005373496s)
--- PASS: TestCertExpiration (304.95s)

                                                
                                    
x
+
TestForceSystemdFlag (65.56s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-013710 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-013710 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.603722305s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-013710 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-013710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-013710
--- PASS: TestForceSystemdFlag (65.56s)

                                                
                                    
x
+
TestForceSystemdEnv (46.1s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-108301 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-108301 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.849601394s)
helpers_test.go:175: Cleaning up "force-systemd-env-108301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-108301
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-108301: (1.249128289s)
--- PASS: TestForceSystemdEnv (46.10s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.52s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0919 20:20:43.934246   15116 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 20:20:43.934399   15116 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0919 20:20:43.962107   15116 install.go:62] docker-machine-driver-kvm2: exit status 1
W0919 20:20:43.962433   15116 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0919 20:20:43.962489   15116 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2984790630/001/docker-machine-driver-kvm2
I0919 20:20:44.197126   15116 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2984790630/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc00052e080 gz:0xc00052e088 tar:0xc0003acf20 tar.bz2:0xc0003acf30 tar.gz:0xc00052e030 tar.xz:0xc00052e060 tar.zst:0xc00052e070 tbz2:0xc0003acf30 tgz:0xc00052e030 txz:0xc00052e060 tzst:0xc00052e070 xz:0xc00052e090 zip:0xc00052e0a0 zst:0xc00052e098] Getters:map[file:0xc001b536d0 http:0xc000903310 https:0xc000903360] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0919 20:20:44.197173   15116 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2984790630/001/docker-machine-driver-kvm2
I0919 20:20:46.940526   15116 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 20:20:46.940634   15116 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 20:20:46.971885   15116 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0919 20:20:46.971921   15116 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0919 20:20:46.972016   15116 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0919 20:20:46.972052   15116 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2984790630/002/docker-machine-driver-kvm2
I0919 20:20:47.029719   15116 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2984790630/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc00052e080 gz:0xc00052e088 tar:0xc0003acf20 tar.bz2:0xc0003acf30 tar.gz:0xc00052e030 tar.xz:0xc00052e060 tar.zst:0xc00052e070 tbz2:0xc0003acf30 tgz:0xc00052e030 txz:0xc00052e060 tzst:0xc00052e070 xz:0xc00052e090 zip:0xc00052e0a0 zst:0xc00052e098] Getters:map[file:0xc001917f50 http:0xc00054b4a0 https:0xc00054b4f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0919 20:20:47.029786   15116 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2984790630/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.52s)
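
Note: the log above shows the driver install path first trying the arch-suffixed release asset (docker-machine-driver-kvm2-amd64) and, when its checksum file 404s, retrying the common un-suffixed name. A self-contained sketch of that retry shape (downloadTo is a stand-in helper, not minikube's downloader, which additionally verifies checksums and reports progress):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// downloadTo fetches url into dst, treating any non-200 status as an error.
func downloadTo(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

// fetchKVMDriver tries the arch-specific asset first, then falls back to the
// common name, mirroring the behaviour described in the log above.
func fetchKVMDriver(version, arch, dst string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	if err := downloadTo(base+"/docker-machine-driver-kvm2-"+arch, dst); err == nil {
		return nil
	}
	return downloadTo(base+"/docker-machine-driver-kvm2", dst)
}

func main() {
	if err := fetchKVMDriver("v1.3.0", "amd64", "/tmp/docker-machine-driver-kvm2"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("driver downloaded")
}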

                                                
                                    
x
+
TestErrorSpam/setup (42.44s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-925882 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-925882 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-925882 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-925882 --driver=kvm2  --container-runtime=crio: (42.437934203s)
--- PASS: TestErrorSpam/setup (42.44s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
x
+
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.67s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

                                                
                                    
x
+
TestErrorSpam/stop (4.89s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 stop: (2.308017793s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 stop: (1.337883231s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-925882 --log_dir /tmp/nospam-925882 stop: (1.239040645s)
--- PASS: TestErrorSpam/stop (4.89s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/test/nested/copy/15116/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.16s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454067 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-454067 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m20.161558661s)
--- PASS: TestFunctional/serial/StartWithProxy (80.16s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (51.62s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0919 19:22:16.613052   15116 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454067 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-454067 --alsologtostderr -v=8: (51.617394469s)
functional_test.go:663: soft start took 51.618103111s for "functional-454067" cluster.
I0919 19:23:08.230744   15116 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (51.62s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-454067 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-454067 cache add registry.k8s.io/pause:3.1: (1.139338101s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-454067 cache add registry.k8s.io/pause:3.3: (1.211915942s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-454067 cache add registry.k8s.io/pause:latest: (1.1464672s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-454067 /tmp/TestFunctionalserialCacheCmdcacheadd_local2777944368/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 cache add minikube-local-cache-test:functional-454067
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-454067 cache add minikube-local-cache-test:functional-454067: (1.941918493s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 cache delete minikube-local-cache-test:functional-454067
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-454067
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454067 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.010081ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-454067 cache reload: (1.014598104s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)
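For reference, the cache_reload sequence above can be replayed by hand against the same profile. A minimal sketch, assuming the functional-454067 profile from this run is still up and using the same pause image:

  $ out/minikube-linux-amd64 -p functional-454067 cache add registry.k8s.io/pause:latest
  $ out/minikube-linux-amd64 -p functional-454067 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ out/minikube-linux-amd64 -p functional-454067 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image no longer present
  $ out/minikube-linux-amd64 -p functional-454067 cache reload                                            # re-loads the images still listed in the cache
  $ out/minikube-linux-amd64 -p functional-454067 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again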

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 kubectl -- --context functional-454067 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-454067 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.95s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454067 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-454067 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.950912993s)
functional_test.go:761: restart took 34.951015866s for "functional-454067" cluster.
I0919 19:23:51.398986   15116 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.95s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-454067 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-454067 logs: (1.45547673s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 logs --file /tmp/TestFunctionalserialLogsFileCmd75804135/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-454067 logs --file /tmp/TestFunctionalserialLogsFileCmd75804135/001/logs.txt: (1.433476008s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                    
TestFunctional/serial/InvalidService (4.48s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-454067 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-454067
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-454067: exit status 115 (265.605999ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.75:32683 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-454067 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-454067 delete -f testdata/invalidsvc.yaml: (1.018561352s)
--- PASS: TestFunctional/serial/InvalidService (4.48s)
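The exit status 115 above is the SVC_UNREACHABLE path: the Service object exists (its NodePort URL is even printed), but no running pod backs it, so the service command refuses to open it. A rough replay, assuming the minikube test working directory (for testdata/invalidsvc.yaml) and the same profile:

  $ kubectl --context functional-454067 apply -f testdata/invalidsvc.yaml
  $ out/minikube-linux-amd64 service invalid-svc -p functional-454067   # exits 115 with SVC_UNREACHABLE
  $ kubectl --context functional-454067 delete -f testdata/invalidsvc.yaml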

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454067 config get cpus: exit status 14 (50.984868ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454067 config get cpus: exit status 14 (45.926852ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
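The two exit status 14 results above are the expected outcome of running config get on a key that was just unset ("specified key could not be found in config"). A minimal sketch of the same toggle, assuming the profile from this run:

  $ out/minikube-linux-amd64 -p functional-454067 config set cpus 2
  $ out/minikube-linux-amd64 -p functional-454067 config get cpus     # prints the stored value, exit 0
  $ out/minikube-linux-amd64 -p functional-454067 config unset cpus
  $ out/minikube-linux-amd64 -p functional-454067 config get cpus     # key not found, exit 14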

                                                
                                    
TestFunctional/parallel/DashboardCmd (18.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-454067 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-454067 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28841: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.58s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-454067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (150.370548ms)

                                                
                                                
-- stdout --
	* [functional-454067] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:24:11.431838   28241 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:24:11.431952   28241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:11.431960   28241 out.go:358] Setting ErrFile to fd 2...
	I0919 19:24:11.431965   28241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:11.432146   28241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:24:11.432657   28241 out.go:352] Setting JSON to false
	I0919 19:24:11.433538   28241 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3995,"bootTime":1726769856,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:24:11.433626   28241 start.go:139] virtualization: kvm guest
	I0919 19:24:11.435884   28241 out.go:177] * [functional-454067] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:24:11.437178   28241 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:24:11.437194   28241 notify.go:220] Checking for updates...
	I0919 19:24:11.439751   28241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:24:11.441102   28241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:24:11.442299   28241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:11.443517   28241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:24:11.444617   28241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:24:11.446087   28241 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:24:11.446501   28241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:24:11.446581   28241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:24:11.463659   28241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35893
	I0919 19:24:11.464139   28241 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:24:11.464786   28241 main.go:141] libmachine: Using API Version  1
	I0919 19:24:11.464811   28241 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:24:11.465310   28241 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:24:11.465513   28241 main.go:141] libmachine: (functional-454067) Calling .DriverName
	I0919 19:24:11.465773   28241 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:24:11.466201   28241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:24:11.466243   28241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:24:11.482768   28241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0919 19:24:11.483225   28241 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:24:11.483717   28241 main.go:141] libmachine: Using API Version  1
	I0919 19:24:11.483740   28241 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:24:11.484253   28241 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:24:11.484444   28241 main.go:141] libmachine: (functional-454067) Calling .DriverName
	I0919 19:24:11.525801   28241 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 19:24:11.527208   28241 start.go:297] selected driver: kvm2
	I0919 19:24:11.527225   28241 start.go:901] validating driver "kvm2" against &{Name:functional-454067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-454067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:24:11.527354   28241 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:24:11.529545   28241 out.go:201] 
	W0919 19:24:11.530961   28241 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 19:24:11.532465   28241 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454067 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
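Exit status 23 here comes from the memory validation quoted in the stderr above: a --dry-run start validates the request against the existing profile without touching the VM, and 250MB is below the 1800MB usable minimum reported by minikube. A sketch of the two calls the test makes:

  # rejected during validation (exit 23 in the run above), nothing is created or restarted
  $ out/minikube-linux-amd64 start -p functional-454067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
  # passes validation using the profile's existing settings
  $ out/minikube-linux-amd64 start -p functional-454067 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio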

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-454067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.027828ms)

                                                
                                                
-- stdout --
	* [functional-454067] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:24:11.289892   28200 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:24:11.290028   28200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:11.290040   28200 out.go:358] Setting ErrFile to fd 2...
	I0919 19:24:11.290046   28200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:11.290321   28200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 19:24:11.290957   28200 out.go:352] Setting JSON to false
	I0919 19:24:11.292085   28200 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3995,"bootTime":1726769856,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:24:11.292211   28200 start.go:139] virtualization: kvm guest
	I0919 19:24:11.294557   28200 out.go:177] * [functional-454067] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0919 19:24:11.295888   28200 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:24:11.295898   28200 notify.go:220] Checking for updates...
	I0919 19:24:11.298883   28200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:24:11.300142   28200 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	I0919 19:24:11.301491   28200 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	I0919 19:24:11.302787   28200 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:24:11.304103   28200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:24:11.306203   28200 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:24:11.306800   28200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:24:11.306860   28200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:24:11.323194   28200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I0919 19:24:11.323682   28200 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:24:11.324185   28200 main.go:141] libmachine: Using API Version  1
	I0919 19:24:11.324209   28200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:24:11.324565   28200 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:24:11.324760   28200 main.go:141] libmachine: (functional-454067) Calling .DriverName
	I0919 19:24:11.324989   28200 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:24:11.325310   28200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 19:24:11.325350   28200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 19:24:11.340639   28200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43457
	I0919 19:24:11.341102   28200 main.go:141] libmachine: () Calling .GetVersion
	I0919 19:24:11.341664   28200 main.go:141] libmachine: Using API Version  1
	I0919 19:24:11.341699   28200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 19:24:11.342085   28200 main.go:141] libmachine: () Calling .GetMachineName
	I0919 19:24:11.342245   28200 main.go:141] libmachine: (functional-454067) Calling .DriverName
	I0919 19:24:11.375118   28200 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0919 19:24:11.376461   28200 start.go:297] selected driver: kvm2
	I0919 19:24:11.376479   28200 start.go:901] validating driver "kvm2" against &{Name:functional-454067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-454067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:24:11.376615   28200 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:24:11.379044   28200 out.go:201] 
	W0919 19:24:11.380444   28200 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 19:24:11.381899   28200 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-454067 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-454067 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-wd79l" [1aa313eb-6dce-43cf-8e8b-d116b153690e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-wd79l" [1aa313eb-6dce-43cf-8e8b-d116b153690e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.005416541s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.75:31323
functional_test.go:1675: http://192.168.39.75:31323: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-wd79l

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.75:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.75:31323
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.50s)
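The connect test boils down to exposing a deployment on a NodePort and fetching it from outside the cluster; the node IP and port (192.168.39.75:31323 in this run) are assigned per run. A minimal replay, assuming the same image and profile:

  $ kubectl --context functional-454067 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  $ kubectl --context functional-454067 expose deployment hello-node-connect --type=NodePort --port=8080
  $ curl "$(out/minikube-linux-amd64 -p functional-454067 service hello-node-connect --url)"   # echoes hostname and request details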

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (48.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a8a54f82-dbb1-4477-bdf5-508cc4025867] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00473568s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-454067 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-454067 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-454067 get pvc myclaim -o=json
I0919 19:24:05.569052   15116 retry.go:31] will retry after 1.44038897s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:6ec0cf26-5db4-47ec-b6bf-1cc65a670fc6 ResourceVersion:754 Generation:0 CreationTimestamp:2024-09-19 19:24:05 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b525a0 VolumeMode:0xc001b525b0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-454067 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-454067 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3d871a6e-eb1f-4bb5-9264-2c022a2ef179] Pending
helpers_test.go:344: "sp-pod" [3d871a6e-eb1f-4bb5-9264-2c022a2ef179] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3d871a6e-eb1f-4bb5-9264-2c022a2ef179] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.223636957s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-454067 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-454067 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-454067 delete -f testdata/storage-provisioner/pod.yaml: (1.689796036s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-454067 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4aa0fe31-3891-4bb7-bc24-bfffd870354a] Pending
helpers_test.go:344: "sp-pod" [4aa0fe31-3891-4bb7-bc24-bfffd870354a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4aa0fe31-3891-4bb7-bc24-bfffd870354a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004045241s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-454067 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.18s)
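The claim the test applies from testdata/storage-provisioner/pvc.yaml is visible in the last-applied-configuration dumped in the retry message above: a 500Mi ReadWriteOnce filesystem PVC named myclaim, bound by the k8s.io/minikube-hostpath provisioner. A YAML equivalent reconstructed from that log line (not the repository file itself), applied inline:

  $ kubectl --context functional-454067 apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: myclaim
    namespace: default
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi
    volumeMode: Filesystem
  EOF
  $ kubectl --context functional-454067 get pvc myclaim   # Pending at first, Bound once the hostpath provisioner creates a PV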

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh -n functional-454067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 cp functional-454067:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3051108216/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh -n functional-454067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh -n functional-454067 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)

                                                
                                    
TestFunctional/parallel/MySQL (33.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-454067 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-9wxwj" [e300fa2a-bda5-429d-82d8-a6affde063f2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-9wxwj" [e300fa2a-bda5-429d-82d8-a6affde063f2] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.003748051s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-454067 exec mysql-6cdb49bbb-9wxwj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-454067 exec mysql-6cdb49bbb-9wxwj -- mysql -ppassword -e "show databases;": exit status 1 (132.525755ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0919 19:24:47.329338   15116 retry.go:31] will retry after 686.917624ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-454067 exec mysql-6cdb49bbb-9wxwj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-454067 exec mysql-6cdb49bbb-9wxwj -- mysql -ppassword -e "show databases;": exit status 1 (131.899241ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0919 19:24:48.149358   15116 retry.go:31] will retry after 1.029957709s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-454067 exec mysql-6cdb49bbb-9wxwj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (33.30s)
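The two ERROR 2002 failures above are expected while mysqld is still initializing inside the pod; the harness simply retries the same exec until the socket is up. An equivalent wait loop, assuming the pod name from this run (it changes with every deployment):

  $ until kubectl --context functional-454067 exec mysql-6cdb49bbb-9wxwj -- mysql -ppassword -e "show databases;"; do
      sleep 1   # socket /var/run/mysqld/mysqld.sock not ready yet
    done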

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/15116/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo cat /etc/test/nested/copy/15116/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)
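The file sync check relies on everything under the profile's .minikube/files directory being copied into the guest at the same relative path; the local sync path logged above ends in etc/test/nested/copy/15116/hosts and shows up in the VM as /etc/test/nested/copy/15116/hosts (15116 is just this run's test process ID). A sketch of the same verification, assuming this run's paths:

  $ cat /home/jenkins/minikube-integration/19664-7917/.minikube/files/etc/test/nested/copy/15116/hosts
  Test file for checking file sync process
  $ out/minikube-linux-amd64 -p functional-454067 ssh "sudo cat /etc/test/nested/copy/15116/hosts"
  Test file for checking file sync process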

                                                
                                    
TestFunctional/parallel/CertSync (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/15116.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo cat /etc/ssl/certs/15116.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/15116.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo cat /usr/share/ca-certificates/15116.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/151162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo cat /etc/ssl/certs/151162.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/151162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo cat /usr/share/ca-certificates/151162.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.54s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-454067 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454067 ssh "sudo systemctl is-active docker": exit status 1 (226.617559ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454067 ssh "sudo systemctl is-active containerd": exit status 1 (209.477423ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
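Note (not part of the test output): the two non-zero exits above are the expected outcome. On a CRI-O profile, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit non-zero, which is exactly what the subtest accepts as a pass. A rough stand-alone sketch of the same check, with the binary path and profile name taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// On a CRI-O cluster both alternative runtimes should report "inactive";
	// the non-nil error mirrors the exit status 1 recorded in the log above.
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-454067",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		fmt.Printf("%s: %s (err: %v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}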

                                                
                                    
TestFunctional/parallel/License (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-454067 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-454067 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-kd44m" [e7cc8bd7-6267-4844-b9e8-ddb325d7304c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-kd44m" [e7cc8bd7-6267-4844-b9e8-ddb325d7304c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00439793s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)
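Note (not part of the test output): a hedged sketch of the deployment flow this subtest drives. The kubectl invocations are copied from the log; the short polling loop is a simplified stand-in for the harness's 10-minute wait, and the context name is the profile from this run.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run executes a command, echoes its output, and panics on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	ctx := "--context=functional-454067"
	run("kubectl", ctx, "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")

	// Poll until the hello-node pod leaves Pending; the report shows this
	// taking roughly 11 seconds on this run.
	for i := 0; i < 60; i++ {
		phase, _ := exec.Command("kubectl", ctx, "get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[0].status.phase}").Output()
		if string(phase) == "Running" {
			fmt.Println("hello-node is running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for hello-node")
}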

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "297.382691ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "46.411466ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "363.647133ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.325637ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
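Note (not part of the test output): the profile subtests above only assert that the command succeeds and returns quickly. A small sketch of consuming the `profile list -o json` output without assuming its exact schema (it is decoded into a generic map here on purpose):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Decode generically; the test above only checks that valid JSON comes back.
	var payload map[string]json.RawMessage
	if err := json.Unmarshal(out, &payload); err != nil {
		panic(err)
	}
	for key, raw := range payload {
		fmt.Printf("%s: %d bytes\n", key, len(raw))
	}
}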

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdany-port3199865039/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726773841255261408" to /tmp/TestFunctionalparallelMountCmdany-port3199865039/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726773841255261408" to /tmp/TestFunctionalparallelMountCmdany-port3199865039/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726773841255261408" to /tmp/TestFunctionalparallelMountCmdany-port3199865039/001/test-1726773841255261408
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (228.640194ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:24:01.484248   15116 retry.go:31] will retry after 378.260293ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 19:24 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 19:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 19:24 test-1726773841255261408
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh cat /mount-9p/test-1726773841255261408
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-454067 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fb116934-a24d-4f5c-af70-26b7f94d7d14] Pending
helpers_test.go:344: "busybox-mount" [fb116934-a24d-4f5c-af70-26b7f94d7d14] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fb116934-a24d-4f5c-af70-26b7f94d7d14] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fb116934-a24d-4f5c-af70-26b7f94d7d14] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004473881s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-454067 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdany-port3199865039/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.44s)
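Note (not part of the test output): a condensed sketch of the same 9p mount flow, including the retry after the first findmnt failure that the log shows. The host directory is a hypothetical placeholder; the profile name and binary path are taken from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	hostDir := "/tmp/mount-demo" // hypothetical host directory
	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		panic(err)
	}

	// Start the mount daemon in the background, like the test's "daemon:" step.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-454067", hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill() // mirrors the "stopping [... mount ...]" cleanup

	// The first findmnt often races the mount (exit status 1 above), so retry briefly.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-454067",
			"ssh", "findmnt -T /mount-9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never became visible in the guest")
}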

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdspecific-port3738029741/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (259.307641ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:24:10.958220   15116 retry.go:31] will retry after 492.998493ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdspecific-port3738029741/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454067 ssh "sudo umount -f /mount-9p": exit status 1 (266.580641ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-454067 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdspecific-port3738029741/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 service list -o json
functional_test.go:1494: Took "358.726646ms" to run "out/minikube-linux-amd64 -p functional-454067 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.75:32362
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.75:32362
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)
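Note (not part of the test output): once the NodePort URL is known (http://192.168.39.75:32362 on this run), the service can be exercised directly. A small sketch, assuming `service hello-node --url` prints a single URL as it does in the log:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-454067",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.75:32362 per the log
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s, %d bytes from echoserver\n", resp.Status, len(body))
}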

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4112551748/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4112551748/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4112551748/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T" /mount1: exit status 1 (256.92242ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 19:24:12.933128   15116 retry.go:31] will retry after 352.486939ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-454067 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4112551748/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4112551748/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4112551748/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-454067 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-454067
localhost/kicbase/echo-server:functional-454067
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-454067 image ls --format short --alsologtostderr:
I0919 19:24:25.633716   29509 out.go:345] Setting OutFile to fd 1 ...
I0919 19:24:25.634000   29509 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:24:25.634010   29509 out.go:358] Setting ErrFile to fd 2...
I0919 19:24:25.634014   29509 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:24:25.634204   29509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
I0919 19:24:25.634779   29509 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:24:25.634893   29509 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:24:25.635251   29509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 19:24:25.635294   29509 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 19:24:25.649959   29509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38663
I0919 19:24:25.650444   29509 main.go:141] libmachine: () Calling .GetVersion
I0919 19:24:25.651070   29509 main.go:141] libmachine: Using API Version  1
I0919 19:24:25.651094   29509 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 19:24:25.651395   29509 main.go:141] libmachine: () Calling .GetMachineName
I0919 19:24:25.651604   29509 main.go:141] libmachine: (functional-454067) Calling .GetState
I0919 19:24:25.653494   29509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 19:24:25.653538   29509 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 19:24:25.667955   29509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
I0919 19:24:25.668468   29509 main.go:141] libmachine: () Calling .GetVersion
I0919 19:24:25.668928   29509 main.go:141] libmachine: Using API Version  1
I0919 19:24:25.668950   29509 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 19:24:25.669366   29509 main.go:141] libmachine: () Calling .GetMachineName
I0919 19:24:25.669553   29509 main.go:141] libmachine: (functional-454067) Calling .DriverName
I0919 19:24:25.669751   29509 ssh_runner.go:195] Run: systemctl --version
I0919 19:24:25.669774   29509 main.go:141] libmachine: (functional-454067) Calling .GetSSHHostname
I0919 19:24:25.672559   29509 main.go:141] libmachine: (functional-454067) DBG | domain functional-454067 has defined MAC address 52:54:00:68:cc:cb in network mk-functional-454067
I0919 19:24:25.672885   29509 main.go:141] libmachine: (functional-454067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:cc:cb", ip: ""} in network mk-functional-454067: {Iface:virbr1 ExpiryTime:2024-09-19 20:21:10 +0000 UTC Type:0 Mac:52:54:00:68:cc:cb Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-454067 Clientid:01:52:54:00:68:cc:cb}
I0919 19:24:25.672923   29509 main.go:141] libmachine: (functional-454067) DBG | domain functional-454067 has defined IP address 192.168.39.75 and MAC address 52:54:00:68:cc:cb in network mk-functional-454067
I0919 19:24:25.673035   29509 main.go:141] libmachine: (functional-454067) Calling .GetSSHPort
I0919 19:24:25.673197   29509 main.go:141] libmachine: (functional-454067) Calling .GetSSHKeyPath
I0919 19:24:25.673347   29509 main.go:141] libmachine: (functional-454067) Calling .GetSSHUsername
I0919 19:24:25.673462   29509 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/functional-454067/id_rsa Username:docker}
I0919 19:24:25.755357   29509 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 19:24:25.792434   29509 main.go:141] libmachine: Making call to close driver server
I0919 19:24:25.792461   29509 main.go:141] libmachine: (functional-454067) Calling .Close
I0919 19:24:25.792755   29509 main.go:141] libmachine: Successfully made call to close driver server
I0919 19:24:25.792775   29509 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 19:24:25.792798   29509 main.go:141] libmachine: Making call to close driver server
I0919 19:24:25.792798   29509 main.go:141] libmachine: (functional-454067) DBG | Closing plugin on server side
I0919 19:24:25.792809   29509 main.go:141] libmachine: (functional-454067) Calling .Close
I0919 19:24:25.793100   29509 main.go:141] libmachine: Successfully made call to close driver server
I0919 19:24:25.793135   29509 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 19:24:25.793149   29509 main.go:141] libmachine: (functional-454067) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
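Note (not part of the test output): the stderr above shows that `image ls` is ultimately backed by `sudo crictl images --output json` inside the VM. A sketch of pulling the same data directly over ssh; the images/repoTags field names follow crictl's usual JSON envelope, which is an assumption here.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-454067",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var payload struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &payload); err != nil {
		panic(err)
	}
	for _, img := range payload.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}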

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-454067 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/kicbase/echo-server           | functional-454067  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-454067  | 5e838f807c4ee | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-454067  | eb36cf2b1059c | 3.33kB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-454067 image ls --format table --alsologtostderr:
I0919 19:24:29.958590   29700 out.go:345] Setting OutFile to fd 1 ...
I0919 19:24:29.958839   29700 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:24:29.958848   29700 out.go:358] Setting ErrFile to fd 2...
I0919 19:24:29.958853   29700 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:24:29.959045   29700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
I0919 19:24:29.959635   29700 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:24:29.959734   29700 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:24:29.960289   29700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 19:24:29.960415   29700 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 19:24:29.975705   29700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
I0919 19:24:29.976230   29700 main.go:141] libmachine: () Calling .GetVersion
I0919 19:24:29.976825   29700 main.go:141] libmachine: Using API Version  1
I0919 19:24:29.976850   29700 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 19:24:29.977214   29700 main.go:141] libmachine: () Calling .GetMachineName
I0919 19:24:29.977403   29700 main.go:141] libmachine: (functional-454067) Calling .GetState
I0919 19:24:29.979154   29700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 19:24:29.979204   29700 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 19:24:29.995461   29700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
I0919 19:24:29.995978   29700 main.go:141] libmachine: () Calling .GetVersion
I0919 19:24:29.996566   29700 main.go:141] libmachine: Using API Version  1
I0919 19:24:29.996598   29700 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 19:24:29.996979   29700 main.go:141] libmachine: () Calling .GetMachineName
I0919 19:24:29.997216   29700 main.go:141] libmachine: (functional-454067) Calling .DriverName
I0919 19:24:29.997410   29700 ssh_runner.go:195] Run: systemctl --version
I0919 19:24:29.997449   29700 main.go:141] libmachine: (functional-454067) Calling .GetSSHHostname
I0919 19:24:30.000581   29700 main.go:141] libmachine: (functional-454067) DBG | domain functional-454067 has defined MAC address 52:54:00:68:cc:cb in network mk-functional-454067
I0919 19:24:30.001012   29700 main.go:141] libmachine: (functional-454067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:cc:cb", ip: ""} in network mk-functional-454067: {Iface:virbr1 ExpiryTime:2024-09-19 20:21:10 +0000 UTC Type:0 Mac:52:54:00:68:cc:cb Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-454067 Clientid:01:52:54:00:68:cc:cb}
I0919 19:24:30.001044   29700 main.go:141] libmachine: (functional-454067) DBG | domain functional-454067 has defined IP address 192.168.39.75 and MAC address 52:54:00:68:cc:cb in network mk-functional-454067
I0919 19:24:30.001353   29700 main.go:141] libmachine: (functional-454067) Calling .GetSSHPort
I0919 19:24:30.001524   29700 main.go:141] libmachine: (functional-454067) Calling .GetSSHKeyPath
I0919 19:24:30.001700   29700 main.go:141] libmachine: (functional-454067) Calling .GetSSHUsername
I0919 19:24:30.001858   29700 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/functional-454067/id_rsa Username:docker}
I0919 19:24:30.136755   29700 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 19:24:30.431470   29700 main.go:141] libmachine: Making call to close driver server
I0919 19:24:30.431491   29700 main.go:141] libmachine: (functional-454067) Calling .Close
I0919 19:24:30.431780   29700 main.go:141] libmachine: (functional-454067) DBG | Closing plugin on server side
I0919 19:24:30.431826   29700 main.go:141] libmachine: Successfully made call to close driver server
I0919 19:24:30.431840   29700 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 19:24:30.431849   29700 main.go:141] libmachine: Making call to close driver server
I0919 19:24:30.431857   29700 main.go:141] libmachine: (functional-454067) Calling .Close
I0919 19:24:30.432075   29700 main.go:141] libmachine: Successfully made call to close driver server
I0919 19:24:30.432090   29700 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image ls --format json --alsologtostderr
2024/09/19 19:24:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-454067 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"eb36cf2b1059c74682a96e442cc7c87b160de5acf70012bfe044ed81df77a415","repoDigests":["localhost/minikube-local-cache-test@sha256:64f64dbbd69f98425a0c173759376fb7d4d81d23764392a71244de723b7f8f31"],"repoTags":["localhost/minikube-local-cache-test:functional-454067"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4
ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed64
7b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["regi
stry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube
-proxy:v1.31.1"],"size":"92733849"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"d5fb50008a673b78215089e9afb3d2d6957b383e42956cdd4256328b35692
6f6","repoDigests":["docker.io/library/68e437600cb4bc51e1dbb0a375eedf8e8a03c5d7b190fd6f97fa45eaf6996896-tmp@sha256:63b0c024e1cfd07265296604c49f2e71637ec629415c94de92155e1bf52c9bbb"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDige
sts":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-454067"],"size":"4943877"},{"id":"5e838f807c4eea95f9d335bff3183f494c3a9c18db047483ba44cc5a8504f58c","repoDigests":["localhost/my-image@sha256:a7fe878ad96df640cdfacdb95cc4405a6b12694c3c2b644407f827d2e226e258"],"repoTags":["localhost/my-image:functional-454067"],"size":"1468600"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":
"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-454067 image ls --format json --alsologtostderr:
I0919 19:24:29.735923   29676 out.go:345] Setting OutFile to fd 1 ...
I0919 19:24:29.736188   29676 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:24:29.736245   29676 out.go:358] Setting ErrFile to fd 2...
I0919 19:24:29.736263   29676 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:24:29.736752   29676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
I0919 19:24:29.737464   29676 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:24:29.737564   29676 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:24:29.737935   29676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 19:24:29.737971   29676 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 19:24:29.752237   29676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
I0919 19:24:29.752652   29676 main.go:141] libmachine: () Calling .GetVersion
I0919 19:24:29.753299   29676 main.go:141] libmachine: Using API Version  1
I0919 19:24:29.753326   29676 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 19:24:29.753700   29676 main.go:141] libmachine: () Calling .GetMachineName
I0919 19:24:29.753878   29676 main.go:141] libmachine: (functional-454067) Calling .GetState
I0919 19:24:29.755579   29676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 19:24:29.755614   29676 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 19:24:29.769747   29676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
I0919 19:24:29.770172   29676 main.go:141] libmachine: () Calling .GetVersion
I0919 19:24:29.770639   29676 main.go:141] libmachine: Using API Version  1
I0919 19:24:29.770659   29676 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 19:24:29.770970   29676 main.go:141] libmachine: () Calling .GetMachineName
I0919 19:24:29.771173   29676 main.go:141] libmachine: (functional-454067) Calling .DriverName
I0919 19:24:29.771375   29676 ssh_runner.go:195] Run: systemctl --version
I0919 19:24:29.771403   29676 main.go:141] libmachine: (functional-454067) Calling .GetSSHHostname
I0919 19:24:29.774645   29676 main.go:141] libmachine: (functional-454067) DBG | domain functional-454067 has defined MAC address 52:54:00:68:cc:cb in network mk-functional-454067
I0919 19:24:29.775104   29676 main.go:141] libmachine: (functional-454067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:cc:cb", ip: ""} in network mk-functional-454067: {Iface:virbr1 ExpiryTime:2024-09-19 20:21:10 +0000 UTC Type:0 Mac:52:54:00:68:cc:cb Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-454067 Clientid:01:52:54:00:68:cc:cb}
I0919 19:24:29.775154   29676 main.go:141] libmachine: (functional-454067) DBG | domain functional-454067 has defined IP address 192.168.39.75 and MAC address 52:54:00:68:cc:cb in network mk-functional-454067
I0919 19:24:29.775307   29676 main.go:141] libmachine: (functional-454067) Calling .GetSSHPort
I0919 19:24:29.775473   29676 main.go:141] libmachine: (functional-454067) Calling .GetSSHKeyPath
I0919 19:24:29.775626   29676 main.go:141] libmachine: (functional-454067) Calling .GetSSHUsername
I0919 19:24:29.775816   29676 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/functional-454067/id_rsa Username:docker}
I0919 19:24:29.864852   29676 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 19:24:29.910156   29676 main.go:141] libmachine: Making call to close driver server
I0919 19:24:29.910172   29676 main.go:141] libmachine: (functional-454067) Calling .Close
I0919 19:24:29.910460   29676 main.go:141] libmachine: Successfully made call to close driver server
I0919 19:24:29.910481   29676 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 19:24:29.910497   29676 main.go:141] libmachine: Making call to close driver server
I0919 19:24:29.910480   29676 main.go:141] libmachine: (functional-454067) DBG | Closing plugin on server side
I0919 19:24:29.910510   29676 main.go:141] libmachine: (functional-454067) Calling .Close
I0919 19:24:29.910718   29676 main.go:141] libmachine: Successfully made call to close driver server
I0919 19:24:29.910750   29676 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-454067 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: eb36cf2b1059c74682a96e442cc7c87b160de5acf70012bfe044ed81df77a415
repoDigests:
- localhost/minikube-local-cache-test@sha256:64f64dbbd69f98425a0c173759376fb7d4d81d23764392a71244de723b7f8f31
repoTags:
- localhost/minikube-local-cache-test:functional-454067
size: "3330"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-454067
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-454067 image ls --format yaml --alsologtostderr:
I0919 19:24:25.838560   29533 out.go:345] Setting OutFile to fd 1 ...
I0919 19:24:25.838660   29533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:24:25.838671   29533 out.go:358] Setting ErrFile to fd 2...
I0919 19:24:25.838675   29533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:24:25.838871   29533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
I0919 19:24:25.839647   29533 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:24:25.839787   29533 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:24:25.840315   29533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 19:24:25.840362   29533 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 19:24:25.855516   29533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45503
I0919 19:24:25.855932   29533 main.go:141] libmachine: () Calling .GetVersion
I0919 19:24:25.856512   29533 main.go:141] libmachine: Using API Version  1
I0919 19:24:25.856535   29533 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 19:24:25.856839   29533 main.go:141] libmachine: () Calling .GetMachineName
I0919 19:24:25.857020   29533 main.go:141] libmachine: (functional-454067) Calling .GetState
I0919 19:24:25.858722   29533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 19:24:25.858764   29533 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 19:24:25.873435   29533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34367
I0919 19:24:25.873859   29533 main.go:141] libmachine: () Calling .GetVersion
I0919 19:24:25.874391   29533 main.go:141] libmachine: Using API Version  1
I0919 19:24:25.874418   29533 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 19:24:25.874721   29533 main.go:141] libmachine: () Calling .GetMachineName
I0919 19:24:25.874897   29533 main.go:141] libmachine: (functional-454067) Calling .DriverName
I0919 19:24:25.875076   29533 ssh_runner.go:195] Run: systemctl --version
I0919 19:24:25.875098   29533 main.go:141] libmachine: (functional-454067) Calling .GetSSHHostname
I0919 19:24:25.877856   29533 main.go:141] libmachine: (functional-454067) DBG | domain functional-454067 has defined MAC address 52:54:00:68:cc:cb in network mk-functional-454067
I0919 19:24:25.878196   29533 main.go:141] libmachine: (functional-454067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:cc:cb", ip: ""} in network mk-functional-454067: {Iface:virbr1 ExpiryTime:2024-09-19 20:21:10 +0000 UTC Type:0 Mac:52:54:00:68:cc:cb Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-454067 Clientid:01:52:54:00:68:cc:cb}
I0919 19:24:25.878212   29533 main.go:141] libmachine: (functional-454067) DBG | domain functional-454067 has defined IP address 192.168.39.75 and MAC address 52:54:00:68:cc:cb in network mk-functional-454067
I0919 19:24:25.878345   29533 main.go:141] libmachine: (functional-454067) Calling .GetSSHPort
I0919 19:24:25.878487   29533 main.go:141] libmachine: (functional-454067) Calling .GetSSHKeyPath
I0919 19:24:25.878615   29533 main.go:141] libmachine: (functional-454067) Calling .GetSSHUsername
I0919 19:24:25.878736   29533 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/functional-454067/id_rsa Username:docker}
I0919 19:24:25.961768   29533 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 19:24:26.002880   29533 main.go:141] libmachine: Making call to close driver server
I0919 19:24:26.002892   29533 main.go:141] libmachine: (functional-454067) Calling .Close
I0919 19:24:26.003198   29533 main.go:141] libmachine: (functional-454067) DBG | Closing plugin on server side
I0919 19:24:26.003232   29533 main.go:141] libmachine: Successfully made call to close driver server
I0919 19:24:26.003246   29533 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 19:24:26.003264   29533 main.go:141] libmachine: Making call to close driver server
I0919 19:24:26.003276   29533 main.go:141] libmachine: (functional-454067) Calling .Close
I0919 19:24:26.003489   29533 main.go:141] libmachine: Successfully made call to close driver server
I0919 19:24:26.003502   29533 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454067 ssh pgrep buildkitd: exit status 1 (185.721691ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image build -t localhost/my-image:functional-454067 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-454067 image build -t localhost/my-image:functional-454067 testdata/build --alsologtostderr: (3.292274959s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-454067 image build -t localhost/my-image:functional-454067 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d5fb50008a6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-454067
--> 5e838f807c4
Successfully tagged localhost/my-image:functional-454067
5e838f807c4eea95f9d335bff3183f494c3a9c18db047483ba44cc5a8504f58c
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-454067 image build -t localhost/my-image:functional-454067 testdata/build --alsologtostderr:
I0919 19:24:26.235679   29587 out.go:345] Setting OutFile to fd 1 ...
I0919 19:24:26.235810   29587 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:24:26.235819   29587 out.go:358] Setting ErrFile to fd 2...
I0919 19:24:26.235823   29587 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:24:26.235992   29587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
I0919 19:24:26.236569   29587 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:24:26.237048   29587 config.go:182] Loaded profile config "functional-454067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:24:26.237470   29587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 19:24:26.237515   29587 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 19:24:26.251953   29587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
I0919 19:24:26.252534   29587 main.go:141] libmachine: () Calling .GetVersion
I0919 19:24:26.253055   29587 main.go:141] libmachine: Using API Version  1
I0919 19:24:26.253086   29587 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 19:24:26.253437   29587 main.go:141] libmachine: () Calling .GetMachineName
I0919 19:24:26.253631   29587 main.go:141] libmachine: (functional-454067) Calling .GetState
I0919 19:24:26.255393   29587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 19:24:26.255425   29587 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 19:24:26.269696   29587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
I0919 19:24:26.270127   29587 main.go:141] libmachine: () Calling .GetVersion
I0919 19:24:26.270607   29587 main.go:141] libmachine: Using API Version  1
I0919 19:24:26.270631   29587 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 19:24:26.270997   29587 main.go:141] libmachine: () Calling .GetMachineName
I0919 19:24:26.271198   29587 main.go:141] libmachine: (functional-454067) Calling .DriverName
I0919 19:24:26.271441   29587 ssh_runner.go:195] Run: systemctl --version
I0919 19:24:26.271470   29587 main.go:141] libmachine: (functional-454067) Calling .GetSSHHostname
I0919 19:24:26.274379   29587 main.go:141] libmachine: (functional-454067) DBG | domain functional-454067 has defined MAC address 52:54:00:68:cc:cb in network mk-functional-454067
I0919 19:24:26.274768   29587 main.go:141] libmachine: (functional-454067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:cc:cb", ip: ""} in network mk-functional-454067: {Iface:virbr1 ExpiryTime:2024-09-19 20:21:10 +0000 UTC Type:0 Mac:52:54:00:68:cc:cb Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:functional-454067 Clientid:01:52:54:00:68:cc:cb}
I0919 19:24:26.274792   29587 main.go:141] libmachine: (functional-454067) DBG | domain functional-454067 has defined IP address 192.168.39.75 and MAC address 52:54:00:68:cc:cb in network mk-functional-454067
I0919 19:24:26.274968   29587 main.go:141] libmachine: (functional-454067) Calling .GetSSHPort
I0919 19:24:26.275179   29587 main.go:141] libmachine: (functional-454067) Calling .GetSSHKeyPath
I0919 19:24:26.275348   29587 main.go:141] libmachine: (functional-454067) Calling .GetSSHUsername
I0919 19:24:26.275529   29587 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/functional-454067/id_rsa Username:docker}
I0919 19:24:26.368255   29587 build_images.go:161] Building image from path: /tmp/build.145184918.tar
I0919 19:24:26.368326   29587 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 19:24:26.378716   29587 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.145184918.tar
I0919 19:24:26.387485   29587 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.145184918.tar: stat -c "%s %y" /var/lib/minikube/build/build.145184918.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.145184918.tar': No such file or directory
I0919 19:24:26.387527   29587 ssh_runner.go:362] scp /tmp/build.145184918.tar --> /var/lib/minikube/build/build.145184918.tar (3072 bytes)
I0919 19:24:26.439632   29587 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.145184918
I0919 19:24:26.450985   29587 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.145184918 -xf /var/lib/minikube/build/build.145184918.tar
I0919 19:24:26.466284   29587 crio.go:315] Building image: /var/lib/minikube/build/build.145184918
I0919 19:24:26.466359   29587 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-454067 /var/lib/minikube/build/build.145184918 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0919 19:24:29.457724   29587 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-454067 /var/lib/minikube/build/build.145184918 --cgroup-manager=cgroupfs: (2.991338858s)
I0919 19:24:29.457794   29587 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.145184918
I0919 19:24:29.469636   29587 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.145184918.tar
I0919 19:24:29.481889   29587 build_images.go:217] Built localhost/my-image:functional-454067 from /tmp/build.145184918.tar
I0919 19:24:29.481920   29587 build_images.go:133] succeeded building to: functional-454067
I0919 19:24:29.481925   29587 build_images.go:134] failed building to: 
I0919 19:24:29.481948   29587 main.go:141] libmachine: Making call to close driver server
I0919 19:24:29.481961   29587 main.go:141] libmachine: (functional-454067) Calling .Close
I0919 19:24:29.482212   29587 main.go:141] libmachine: (functional-454067) DBG | Closing plugin on server side
I0919 19:24:29.482235   29587 main.go:141] libmachine: Successfully made call to close driver server
I0919 19:24:29.482247   29587 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 19:24:29.482262   29587 main.go:141] libmachine: Making call to close driver server
I0919 19:24:29.482273   29587 main.go:141] libmachine: (functional-454067) Calling .Close
I0919 19:24:29.482457   29587 main.go:141] libmachine: Successfully made call to close driver server
I0919 19:24:29.482498   29587 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 19:24:29.482498   29587 main.go:141] libmachine: (functional-454067) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)
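Note: the three STEP lines in the build transcript above imply that testdata/build contains a Dockerfile roughly like the sketch below (reconstructed from the logged steps only; the actual file may differ, and content.txt is assumed to be present in the build context):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /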

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.238029134s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-454067
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image load --daemon kicbase/echo-server:functional-454067 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-454067 image load --daemon kicbase/echo-server:functional-454067 --alsologtostderr: (1.822953607s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image load --daemon kicbase/echo-server:functional-454067 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-454067
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image load --daemon kicbase/echo-server:functional-454067 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image save kicbase/echo-server:functional-454067 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image rm kicbase/echo-server:functional-454067 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-454067 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.652933406s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-454067
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-454067 image save --daemon kicbase/echo-server:functional-454067 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-454067
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-454067
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-454067
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-454067
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (195.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-076992 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-076992 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.72444181s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.36s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-076992 -- rollout status deployment/busybox: (4.924406877s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-8wfb7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-c64rv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-jl6lr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-8wfb7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-c64rv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-jl6lr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-8wfb7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-c64rv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-jl6lr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.03s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-8wfb7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-8wfb7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-c64rv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-c64rv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-jl6lr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076992 -- exec busybox-7dff88458-jl6lr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-076992 -v=7 --alsologtostderr
E0919 19:28:59.335052   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:28:59.341501   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:28:59.352844   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:28:59.374263   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:28:59.415744   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:28:59.497216   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:28:59.659386   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:28:59.981204   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:29:00.623024   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:29:01.905197   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:29:04.466878   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:29:09.588520   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-076992 -v=7 --alsologtostderr: (55.835133029s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.69s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-076992 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp testdata/cp-test.txt ha-076992:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992:/home/docker/cp-test.txt ha-076992-m02:/home/docker/cp-test_ha-076992_ha-076992-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m02 "sudo cat /home/docker/cp-test_ha-076992_ha-076992-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992:/home/docker/cp-test.txt ha-076992-m03:/home/docker/cp-test_ha-076992_ha-076992-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m03 "sudo cat /home/docker/cp-test_ha-076992_ha-076992-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992:/home/docker/cp-test.txt ha-076992-m04:/home/docker/cp-test_ha-076992_ha-076992-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m04 "sudo cat /home/docker/cp-test_ha-076992_ha-076992-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp testdata/cp-test.txt ha-076992-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m02:/home/docker/cp-test.txt ha-076992:/home/docker/cp-test_ha-076992-m02_ha-076992.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992 "sudo cat /home/docker/cp-test_ha-076992-m02_ha-076992.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m02:/home/docker/cp-test.txt ha-076992-m03:/home/docker/cp-test_ha-076992-m02_ha-076992-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m03 "sudo cat /home/docker/cp-test_ha-076992-m02_ha-076992-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m02:/home/docker/cp-test.txt ha-076992-m04:/home/docker/cp-test_ha-076992-m02_ha-076992-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m04 "sudo cat /home/docker/cp-test_ha-076992-m02_ha-076992-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp testdata/cp-test.txt ha-076992-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt ha-076992:/home/docker/cp-test_ha-076992-m03_ha-076992.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992 "sudo cat /home/docker/cp-test_ha-076992-m03_ha-076992.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt ha-076992-m02:/home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt
E0919 19:29:19.829790   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m02 "sudo cat /home/docker/cp-test_ha-076992-m03_ha-076992-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m03:/home/docker/cp-test.txt ha-076992-m04:/home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m04 "sudo cat /home/docker/cp-test_ha-076992-m03_ha-076992-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp testdata/cp-test.txt ha-076992-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3267558097/001/cp-test_ha-076992-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt ha-076992:/home/docker/cp-test_ha-076992-m04_ha-076992.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992 "sudo cat /home/docker/cp-test_ha-076992-m04_ha-076992.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt ha-076992-m02:/home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m02 "sudo cat /home/docker/cp-test_ha-076992-m04_ha-076992-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 cp ha-076992-m04:/home/docker/cp-test.txt ha-076992-m03:/home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 ssh -n ha-076992-m03 "sudo cat /home/docker/cp-test_ha-076992-m04_ha-076992-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.153601271s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.15s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-076992 node delete m03 -v=7 --alsologtostderr: (15.940079348s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-076992 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
TestJSONOutput/start/Command (84.89s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-971896 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-971896 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.894394911s)
--- PASS: TestJSONOutput/start/Command (84.89s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-971896 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-971896 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-971896 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-971896 --output=json --user=testUser: (7.355935932s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-994386 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-994386 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.519713ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"02a332b6-868d-4518-bdc3-3b2f565e7f1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-994386] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"02ad2a9e-87c5-4810-8808-f430008121c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"c0e2185f-72ec-4e6e-9485-a82165c2d9cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d7625ca0-99cb-4e8f-a64b-b821f74630bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig"}}
	{"specversion":"1.0","id":"6b061968-9b83-44eb-9118-0248968b9656","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube"}}
	{"specversion":"1.0","id":"a8cb2b0c-a7c3-4957-9464-96654f6ab446","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6f632e9d-c4b2-49bd-be7d-738bfe655df0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"080afe22-b37e-46fb-80ab-e4dd530ff79e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-994386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-994386
--- PASS: TestErrorJSONOutput (0.19s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (89.34s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-448459 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-448459 --driver=kvm2  --container-runtime=crio: (43.997909787s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-461094 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-461094 --driver=kvm2  --container-runtime=crio: (42.748822494s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-448459
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-461094
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-461094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-461094
helpers_test.go:175: Cleaning up "first-448459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-448459
--- PASS: TestMinikubeProfile (89.34s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-359816 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-359816 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.088560024s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.09s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-359816 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-359816 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.02s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-375991 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-375991 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.023260237s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.02s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-375991 ssh -- ls /minikube-host
E0919 19:58:59.335455   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-375991 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-359816 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-375991 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-375991 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-375991
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-375991: (1.292793924s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.96s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-375991
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-375991: (20.962059512s)
--- PASS: TestMountStart/serial/RestartStopped (21.96s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-375991 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-375991 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (112.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-282812 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-282812 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.848303997s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.25s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-282812 -- rollout status deployment/busybox: (4.126144214s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- exec busybox-7dff88458-c68r8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- exec busybox-7dff88458-mmwbs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- exec busybox-7dff88458-c68r8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- exec busybox-7dff88458-mmwbs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- exec busybox-7dff88458-c68r8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- exec busybox-7dff88458-mmwbs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.52s)
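
The manifest applied above (testdata/multinodes/multinode-pod-dns-test.yaml) is not reproduced in this log. The sketch below is only an approximation of it: a two-replica busybox Deployment plus the same in-pod DNS lookups, driven with plain kubectl against this run's context; the real manifest presumably also forces the replicas onto different nodes, which this sketch omits.

# Hypothetical stand-in for the test manifest; pod spread across nodes is not enforced here.
kubectl --context multinode-282812 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 2
  selector:
    matchLabels: {app: busybox}
  template:
    metadata:
      labels: {app: busybox}
    spec:
      containers:
      - name: busybox
        image: busybox:stable
        command: ["sleep", "3600"]
EOF

kubectl --context multinode-282812 rollout status deployment/busybox
# Resolve an external name and the in-cluster API service from every replica.
for pod in $(kubectl --context multinode-282812 get pods -l app=busybox -o name); do
  kubectl --context multinode-282812 exec "$pod" -- nslookup kubernetes.io
  kubectl --context multinode-282812 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done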

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- exec busybox-7dff88458-c68r8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- exec busybox-7dff88458-c68r8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- exec busybox-7dff88458-mmwbs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-282812 -- exec busybox-7dff88458-mmwbs -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
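
The extraction pipeline above is terse, so here is what it does, shown for one of this run's pod names. Picking output line 5 and field 3 is specific to the busybox nslookup output format in this image; a minimal sketch:

# nslookup prints a resolver header first; in this busybox image the resolved
# address for host.minikube.internal lands on output line 5, and the IP itself
# is the third space-separated field of that line.
HOST_IP=$(kubectl --context multinode-282812 exec busybox-7dff88458-c68r8 -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")

# One ping from the pod to that address (192.168.39.1 here, the host-side
# gateway of the KVM network) proves pod-to-host connectivity.
kubectl --context multinode-282812 exec busybox-7dff88458-c68r8 -- sh -c "ping -c 1 $HOST_IP"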

                                                
                                    
x
+
TestMultiNode/serial/AddNode (50.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-282812 -v 3 --alsologtostderr
E0919 20:02:02.406364   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-282812 -v 3 --alsologtostderr: (50.265525477s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.82s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-282812 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.56s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp testdata/cp-test.txt multinode-282812:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp multinode-282812:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile472680244/001/cp-test_multinode-282812.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp multinode-282812:/home/docker/cp-test.txt multinode-282812-m02:/home/docker/cp-test_multinode-282812_multinode-282812-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m02 "sudo cat /home/docker/cp-test_multinode-282812_multinode-282812-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp multinode-282812:/home/docker/cp-test.txt multinode-282812-m03:/home/docker/cp-test_multinode-282812_multinode-282812-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m03 "sudo cat /home/docker/cp-test_multinode-282812_multinode-282812-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp testdata/cp-test.txt multinode-282812-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp multinode-282812-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile472680244/001/cp-test_multinode-282812-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp multinode-282812-m02:/home/docker/cp-test.txt multinode-282812:/home/docker/cp-test_multinode-282812-m02_multinode-282812.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812 "sudo cat /home/docker/cp-test_multinode-282812-m02_multinode-282812.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp multinode-282812-m02:/home/docker/cp-test.txt multinode-282812-m03:/home/docker/cp-test_multinode-282812-m02_multinode-282812-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m03 "sudo cat /home/docker/cp-test_multinode-282812-m02_multinode-282812-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp testdata/cp-test.txt multinode-282812-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp multinode-282812-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile472680244/001/cp-test_multinode-282812-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp multinode-282812-m03:/home/docker/cp-test.txt multinode-282812:/home/docker/cp-test_multinode-282812-m03_multinode-282812.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812 "sudo cat /home/docker/cp-test_multinode-282812-m03_multinode-282812.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 cp multinode-282812-m03:/home/docker/cp-test.txt multinode-282812-m02:/home/docker/cp-test_multinode-282812-m03_multinode-282812-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 ssh -n multinode-282812-m02 "sudo cat /home/docker/cp-test_multinode-282812-m03_multinode-282812-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.96s)
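
Every cp/ssh pair above follows the same round-trip pattern: copy a file onto a node, cat it back over ssh, and compare it with the original. A condensed sketch for one node from this run:

# Copy the test file onto the m02 node, then read it back and diff it against the source.
SRC=testdata/cp-test.txt
minikube -p multinode-282812 cp "$SRC" multinode-282812-m02:/home/docker/cp-test.txt
minikube -p multinode-282812 ssh -n multinode-282812-m02 "sudo cat /home/docker/cp-test.txt" \
  | diff - "$SRC" && echo "round-trip OK"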

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-282812 node stop m03: (1.469816014s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-282812 status: exit status 7 (416.348697ms)

                                                
                                                
-- stdout --
	multinode-282812
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-282812-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-282812-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-282812 status --alsologtostderr: exit status 7 (408.955896ms)

                                                
                                                
-- stdout --
	multinode-282812
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-282812-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-282812-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 20:02:24.182647   47579 out.go:345] Setting OutFile to fd 1 ...
	I0919 20:02:24.182784   47579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:02:24.182794   47579 out.go:358] Setting ErrFile to fd 2...
	I0919 20:02:24.182798   47579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 20:02:24.182989   47579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-7917/.minikube/bin
	I0919 20:02:24.183204   47579 out.go:352] Setting JSON to false
	I0919 20:02:24.183235   47579 mustload.go:65] Loading cluster: multinode-282812
	I0919 20:02:24.183328   47579 notify.go:220] Checking for updates...
	I0919 20:02:24.183698   47579 config.go:182] Loaded profile config "multinode-282812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 20:02:24.183717   47579 status.go:174] checking status of multinode-282812 ...
	I0919 20:02:24.184184   47579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:02:24.184244   47579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:02:24.202634   47579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0919 20:02:24.203114   47579 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:02:24.203685   47579 main.go:141] libmachine: Using API Version  1
	I0919 20:02:24.203708   47579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:02:24.204088   47579 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:02:24.204288   47579 main.go:141] libmachine: (multinode-282812) Calling .GetState
	I0919 20:02:24.206041   47579 status.go:364] multinode-282812 host status = "Running" (err=<nil>)
	I0919 20:02:24.206061   47579 host.go:66] Checking if "multinode-282812" exists ...
	I0919 20:02:24.206462   47579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:02:24.206506   47579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:02:24.221941   47579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0919 20:02:24.222435   47579 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:02:24.222879   47579 main.go:141] libmachine: Using API Version  1
	I0919 20:02:24.222903   47579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:02:24.223261   47579 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:02:24.223430   47579 main.go:141] libmachine: (multinode-282812) Calling .GetIP
	I0919 20:02:24.226419   47579 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:02:24.226890   47579 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:02:24.226909   47579 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:02:24.227076   47579 host.go:66] Checking if "multinode-282812" exists ...
	I0919 20:02:24.227447   47579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:02:24.227494   47579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:02:24.242963   47579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0919 20:02:24.243445   47579 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:02:24.243896   47579 main.go:141] libmachine: Using API Version  1
	I0919 20:02:24.243914   47579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:02:24.244247   47579 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:02:24.244409   47579 main.go:141] libmachine: (multinode-282812) Calling .DriverName
	I0919 20:02:24.244608   47579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 20:02:24.244636   47579 main.go:141] libmachine: (multinode-282812) Calling .GetSSHHostname
	I0919 20:02:24.247566   47579 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:02:24.247962   47579 main.go:141] libmachine: (multinode-282812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:8a:89", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 20:59:39 +0000 UTC Type:0 Mac:52:54:00:98:8a:89 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-282812 Clientid:01:52:54:00:98:8a:89}
	I0919 20:02:24.247989   47579 main.go:141] libmachine: (multinode-282812) DBG | domain multinode-282812 has defined IP address 192.168.39.87 and MAC address 52:54:00:98:8a:89 in network mk-multinode-282812
	I0919 20:02:24.248131   47579 main.go:141] libmachine: (multinode-282812) Calling .GetSSHPort
	I0919 20:02:24.248272   47579 main.go:141] libmachine: (multinode-282812) Calling .GetSSHKeyPath
	I0919 20:02:24.248423   47579 main.go:141] libmachine: (multinode-282812) Calling .GetSSHUsername
	I0919 20:02:24.248601   47579 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/multinode-282812/id_rsa Username:docker}
	I0919 20:02:24.328750   47579 ssh_runner.go:195] Run: systemctl --version
	I0919 20:02:24.334576   47579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 20:02:24.348724   47579 kubeconfig.go:125] found "multinode-282812" server: "https://192.168.39.87:8443"
	I0919 20:02:24.348759   47579 api_server.go:166] Checking apiserver status ...
	I0919 20:02:24.348793   47579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 20:02:24.362374   47579 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1048/cgroup
	W0919 20:02:24.372053   47579 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1048/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 20:02:24.372097   47579 ssh_runner.go:195] Run: ls
	I0919 20:02:24.376670   47579 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0919 20:02:24.380770   47579 api_server.go:279] https://192.168.39.87:8443/healthz returned 200:
	ok
	I0919 20:02:24.380789   47579 status.go:456] multinode-282812 apiserver status = Running (err=<nil>)
	I0919 20:02:24.380798   47579 status.go:176] multinode-282812 status: &{Name:multinode-282812 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 20:02:24.380813   47579 status.go:174] checking status of multinode-282812-m02 ...
	I0919 20:02:24.381186   47579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:02:24.381223   47579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:02:24.396608   47579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46121
	I0919 20:02:24.397014   47579 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:02:24.397558   47579 main.go:141] libmachine: Using API Version  1
	I0919 20:02:24.397591   47579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:02:24.397935   47579 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:02:24.398128   47579 main.go:141] libmachine: (multinode-282812-m02) Calling .GetState
	I0919 20:02:24.399667   47579 status.go:364] multinode-282812-m02 host status = "Running" (err=<nil>)
	I0919 20:02:24.399683   47579 host.go:66] Checking if "multinode-282812-m02" exists ...
	I0919 20:02:24.400013   47579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:02:24.400048   47579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:02:24.414793   47579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38093
	I0919 20:02:24.415277   47579 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:02:24.415795   47579 main.go:141] libmachine: Using API Version  1
	I0919 20:02:24.415817   47579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:02:24.416142   47579 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:02:24.416335   47579 main.go:141] libmachine: (multinode-282812-m02) Calling .GetIP
	I0919 20:02:24.419237   47579 main.go:141] libmachine: (multinode-282812-m02) DBG | domain multinode-282812-m02 has defined MAC address 52:54:00:3e:e6:1f in network mk-multinode-282812
	I0919 20:02:24.419663   47579 main.go:141] libmachine: (multinode-282812-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e6:1f", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 21:00:39 +0000 UTC Type:0 Mac:52:54:00:3e:e6:1f Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-282812-m02 Clientid:01:52:54:00:3e:e6:1f}
	I0919 20:02:24.419725   47579 main.go:141] libmachine: (multinode-282812-m02) DBG | domain multinode-282812-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:3e:e6:1f in network mk-multinode-282812
	I0919 20:02:24.419871   47579 host.go:66] Checking if "multinode-282812-m02" exists ...
	I0919 20:02:24.420190   47579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:02:24.420223   47579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:02:24.434729   47579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33077
	I0919 20:02:24.435187   47579 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:02:24.435589   47579 main.go:141] libmachine: Using API Version  1
	I0919 20:02:24.435616   47579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:02:24.435917   47579 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:02:24.436079   47579 main.go:141] libmachine: (multinode-282812-m02) Calling .DriverName
	I0919 20:02:24.436266   47579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 20:02:24.436285   47579 main.go:141] libmachine: (multinode-282812-m02) Calling .GetSSHHostname
	I0919 20:02:24.438538   47579 main.go:141] libmachine: (multinode-282812-m02) DBG | domain multinode-282812-m02 has defined MAC address 52:54:00:3e:e6:1f in network mk-multinode-282812
	I0919 20:02:24.438866   47579 main.go:141] libmachine: (multinode-282812-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e6:1f", ip: ""} in network mk-multinode-282812: {Iface:virbr1 ExpiryTime:2024-09-19 21:00:39 +0000 UTC Type:0 Mac:52:54:00:3e:e6:1f Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-282812-m02 Clientid:01:52:54:00:3e:e6:1f}
	I0919 20:02:24.438886   47579 main.go:141] libmachine: (multinode-282812-m02) DBG | domain multinode-282812-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:3e:e6:1f in network mk-multinode-282812
	I0919 20:02:24.439027   47579 main.go:141] libmachine: (multinode-282812-m02) Calling .GetSSHPort
	I0919 20:02:24.439171   47579 main.go:141] libmachine: (multinode-282812-m02) Calling .GetSSHKeyPath
	I0919 20:02:24.439306   47579 main.go:141] libmachine: (multinode-282812-m02) Calling .GetSSHUsername
	I0919 20:02:24.439462   47579 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19664-7917/.minikube/machines/multinode-282812-m02/id_rsa Username:docker}
	I0919 20:02:24.516334   47579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 20:02:24.530620   47579 status.go:176] multinode-282812-m02 status: &{Name:multinode-282812-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 20:02:24.530663   47579 status.go:174] checking status of multinode-282812-m03 ...
	I0919 20:02:24.530992   47579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 20:02:24.531034   47579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 20:02:24.547012   47579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I0919 20:02:24.547546   47579 main.go:141] libmachine: () Calling .GetVersion
	I0919 20:02:24.548077   47579 main.go:141] libmachine: Using API Version  1
	I0919 20:02:24.548102   47579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 20:02:24.548434   47579 main.go:141] libmachine: () Calling .GetMachineName
	I0919 20:02:24.548607   47579 main.go:141] libmachine: (multinode-282812-m03) Calling .GetState
	I0919 20:02:24.550265   47579 status.go:364] multinode-282812-m03 host status = "Stopped" (err=<nil>)
	I0919 20:02:24.550281   47579 status.go:377] host is not running, skipping remaining checks
	I0919 20:02:24.550288   47579 status.go:176] multinode-282812-m03 status: &{Name:multinode-282812-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-282812 node start m03 -v=7 --alsologtostderr: (37.831162046s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-282812 node delete m03: (1.60837262s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.12s)
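
Taken together, AddNode, StopNode, StartAfterStop and DeleteNode walk one worker node through its whole lifecycle. The commands they drive, collected in order (profile and node names as in this run):

# Grow the cluster by one worker, stop it, bring it back, then remove it.
minikube node add -p multinode-282812 -v 3 --alsologtostderr
minikube -p multinode-282812 node stop m03
minikube -p multinode-282812 node start m03 -v=7 --alsologtostderr
minikube -p multinode-282812 node delete m03
# While any node is stopped, "status" exits with code 7, as seen in StopNode above.
minikube -p multinode-282812 status --alsologtostderr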

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (193.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-282812 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0919 20:13:59.335260   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-282812 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m12.700629449s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-282812 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (193.21s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (43.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-282812
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-282812-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-282812-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.840955ms)

                                                
                                                
-- stdout --
	* [multinode-282812-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-282812-m02' is duplicated with machine name 'multinode-282812-m02' in profile 'multinode-282812'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-282812-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-282812-m03 --driver=kvm2  --container-runtime=crio: (41.938891543s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-282812
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-282812: exit status 80 (221.85973ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-282812 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-282812-m03 already exists in multinode-282812-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-282812-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.02s)

                                                
                                    
x
+
TestScheduledStopUnix (115.4s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-246931 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-246931 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.832441747s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-246931 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-246931 -n scheduled-stop-246931
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-246931 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0919 20:18:38.384447   15116 retry.go:31] will retry after 71.143µs: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.385578   15116 retry.go:31] will retry after 175.847µs: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.386718   15116 retry.go:31] will retry after 185.944µs: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.387821   15116 retry.go:31] will retry after 256.366µs: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.388931   15116 retry.go:31] will retry after 479.79µs: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.390049   15116 retry.go:31] will retry after 961.066µs: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.391173   15116 retry.go:31] will retry after 968.041µs: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.392308   15116 retry.go:31] will retry after 1.404151ms: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.394498   15116 retry.go:31] will retry after 2.317397ms: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.397695   15116 retry.go:31] will retry after 2.912843ms: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.400895   15116 retry.go:31] will retry after 6.112369ms: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.408101   15116 retry.go:31] will retry after 12.366362ms: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.421317   15116 retry.go:31] will retry after 10.170679ms: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.432547   15116 retry.go:31] will retry after 19.412497ms: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
I0919 20:18:38.452791   15116 retry.go:31] will retry after 19.112792ms: open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/scheduled-stop-246931/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-246931 --cancel-scheduled
E0919 20:18:42.409267   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:18:59.338076   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-246931 -n scheduled-stop-246931
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-246931
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-246931 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-246931
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-246931: exit status 7 (63.812891ms)

                                                
                                                
-- stdout --
	scheduled-stop-246931
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-246931 -n scheduled-stop-246931
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-246931 -n scheduled-stop-246931: exit status 7 (64.165173ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-246931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-246931
--- PASS: TestScheduledStopUnix (115.40s)
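
The scheduled-stop flow above boils down to three flags on "minikube stop". A condensed sketch, assuming a hypothetical profile name "demo":

# Arm a stop five minutes out; the countdown is visible via the TimeToStop status field.
minikube stop -p demo --schedule 5m
minikube status -p demo --format='{{.TimeToStop}}'
# Re-arming replaces the previous schedule; --cancel-scheduled clears it without stopping.
minikube stop -p demo --schedule 15s
minikube stop -p demo --cancel-scheduled
# Once a schedule is allowed to fire, the host reports Stopped (and status exits 7, as above).
minikube status -p demo --format='{{.Host}}'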

                                                
                                    
x
+
TestRunningBinaryUpgrade (242.93s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1361363688 start -p running-upgrade-070299 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1361363688 start -p running-upgrade-070299 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m20.536302968s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-070299 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-070299 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m38.712171369s)
helpers_test.go:175: Cleaning up "running-upgrade-070299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-070299
--- PASS: TestRunningBinaryUpgrade (242.93s)
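
Both binary-upgrade tests follow the same shape: build a cluster with an older release binary, then point the freshly built binary at the same profile. A condensed sketch using this run's paths:

# Older release downloaded by the test, and the binary under test.
OLD=/tmp/minikube-v1.26.0.1361363688
NEW=out/minikube-linux-amd64

$OLD start -p running-upgrade-070299 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
# (The "stopped" variant inserts "$OLD -p <profile> stop" here before upgrading.)
$NEW start -p running-upgrade-070299 --memory=2200 --alsologtostderr -v=1 \
  --driver=kvm2 --container-runtime=crio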

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045748 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-045748 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.551261ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-045748] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-7917/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-7917/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
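
The exit-14 failure above documents a usage constraint rather than a regression: --no-kubernetes cannot be combined with --kubernetes-version. The form the later NoKubernetes tests actually run drops the version pin:

# If a version was pinned via global config, clear it first (as the error message suggests).
minikube config unset kubernetes-version
# Then a plain --no-kubernetes start works (flags as used by the Start test below).
minikube start -p NoKubernetes-045748 --no-kubernetes --driver=kvm2 --container-runtime=crio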

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (95.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045748 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045748 --driver=kvm2  --container-runtime=crio: (1m35.352130164s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-045748 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (42.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045748 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045748 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.844492054s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-045748 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-045748 status -o json: exit status 2 (268.015297ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-045748","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-045748
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-045748: (1.084466614s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (47.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045748 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045748 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.912399256s)
--- PASS: TestNoKubernetes/serial/Start (47.91s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-045748 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-045748 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.513167ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
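
The exit codes above carry the whole result: systemctl reports the kubelet unit as inactive with a non-zero status (3 in this run), minikube ssh surfaces that as a failure, and the test takes the failure as proof that no kubelet is running. A minimal sketch of the same check:

# Expect failure: on a --no-kubernetes profile the kubelet service is not active.
if ! minikube ssh -p NoKubernetes-045748 "sudo systemctl is-active --quiet service kubelet"; then
  echo "kubelet is not active, as expected for a --no-kubernetes profile"
fi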

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.276275066s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-045748
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-045748: (1.291611806s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (44.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045748 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045748 --driver=kvm2  --container-runtime=crio: (44.535400824s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-045748 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-045748 "sudo systemctl is-active --quiet service kubelet": exit status 1 (189.473532ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestPause/serial/Start (79.9s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-670672 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-670672 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m19.903355404s)
--- PASS: TestPause/serial/Start (79.90s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (136.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3864497019 start -p stopped-upgrade-927381 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0919 20:23:59.337700   15116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-7917/.minikube/profiles/functional-454067/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3864497019 start -p stopped-upgrade-927381 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m9.497351027s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3864497019 -p stopped-upgrade-927381 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3864497019 -p stopped-upgrade-927381 stop: (2.127373582s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-927381 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-927381 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.03489441s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (136.66s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-927381
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    

Test skip (32/203)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)